00:00:00.001 Started by upstream project "autotest-per-patch" build number 127167 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.002 Started by upstream project "autotest-per-patch" build number 127157 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.012 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.012 The recommended git tool is: git 00:00:00.013 using credential 00000000-0000-0000-0000-000000000002 00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.027 Fetching changes from the remote Git repository 00:00:00.029 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.042 Using shallow fetch with depth 1 00:00:00.042 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.042 > git --version # timeout=10 00:00:00.059 > git --version # 'git version 2.39.2' 00:00:00.059 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.092 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.092 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/1 # timeout=5 00:00:02.533 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.545 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.556 Checking out Revision 3a32004aa56235495bc61219c6f66bfa82f61e74 (FETCH_HEAD) 00:00:02.556 > git config core.sparsecheckout # timeout=10 00:00:02.568 > git read-tree -mu HEAD # timeout=10 00:00:02.584 > git checkout -f 3a32004aa56235495bc61219c6f66bfa82f61e74 # timeout=5 00:00:02.602 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for autotest jobs" 00:00:02.603 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:02.708 [Pipeline] Start of Pipeline 00:00:02.719 [Pipeline] library 00:00:02.720 Loading library shm_lib@master 00:00:02.720 Library shm_lib@master is cached. Copying from home. 00:00:02.735 [Pipeline] node 00:00:02.742 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.744 [Pipeline] { 00:00:02.756 [Pipeline] catchError 00:00:02.757 [Pipeline] { 00:00:02.772 [Pipeline] wrap 00:00:02.780 [Pipeline] { 00:00:02.786 [Pipeline] stage 00:00:02.787 [Pipeline] { (Prologue) 00:00:02.806 [Pipeline] echo 00:00:02.807 Node: VM-host-SM17 00:00:02.814 [Pipeline] cleanWs 00:00:02.823 [WS-CLEANUP] Deleting project workspace... 00:00:02.823 [WS-CLEANUP] Deferred wipeout is used... 
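The prologue above shallow-fetches a single Gerrit change ref into the job's jbp checkout and then checks out FETCH_HEAD. A minimal sketch of the same sequence outside Jenkins, reusing the ref and commit shown in this run and assuming credentials for the authenticated /a/ endpoint are available; the local directory name is illustrative:

  git init jbp && cd jbp
  git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/1
  git checkout -f FETCH_HEAD   # in this run FETCH_HEAD resolved to 3a32004aa56235495bc61219c6f66bfa82f61e74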
00:00:02.829 [WS-CLEANUP] done 00:00:02.985 [Pipeline] setCustomBuildProperty 00:00:03.056 [Pipeline] httpRequest 00:00:03.078 [Pipeline] echo 00:00:03.079 Sorcerer 10.211.164.101 is alive 00:00:03.087 [Pipeline] httpRequest 00:00:03.091 HttpMethod: GET 00:00:03.092 URL: http://10.211.164.101/packages/jbp_3a32004aa56235495bc61219c6f66bfa82f61e74.tar.gz 00:00:03.092 Sending request to url: http://10.211.164.101/packages/jbp_3a32004aa56235495bc61219c6f66bfa82f61e74.tar.gz 00:00:03.094 Response Code: HTTP/1.1 200 OK 00:00:03.094 Success: Status code 200 is in the accepted range: 200,404 00:00:03.094 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_3a32004aa56235495bc61219c6f66bfa82f61e74.tar.gz 00:00:03.237 [Pipeline] sh 00:00:03.515 + tar --no-same-owner -xf jbp_3a32004aa56235495bc61219c6f66bfa82f61e74.tar.gz 00:00:03.530 [Pipeline] httpRequest 00:00:03.547 [Pipeline] echo 00:00:03.548 Sorcerer 10.211.164.101 is alive 00:00:03.556 [Pipeline] httpRequest 00:00:03.560 HttpMethod: GET 00:00:03.560 URL: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz 00:00:03.561 Sending request to url: http://10.211.164.101/packages/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz 00:00:03.562 Response Code: HTTP/1.1 200 OK 00:00:03.562 Success: Status code 200 is in the accepted range: 200,404 00:00:03.563 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz 00:00:21.330 [Pipeline] sh 00:00:21.610 + tar --no-same-owner -xf spdk_86fd5638bafa503cd3ee77ac82f66dbd02cc266c.tar.gz 00:00:24.907 [Pipeline] sh 00:00:25.184 + git -C spdk log --oneline -n5 00:00:25.184 86fd5638b autotest: reduce RAID tests runs 00:00:25.184 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:25.184 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:25.184 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:25.184 d005e023b raid: fix empty slot not updated in sb after resize 00:00:25.204 [Pipeline] writeFile 00:00:25.218 [Pipeline] sh 00:00:25.492 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:25.503 [Pipeline] sh 00:00:25.783 + cat autorun-spdk.conf 00:00:25.783 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.783 SPDK_RUN_ASAN=1 00:00:25.783 SPDK_RUN_UBSAN=1 00:00:25.783 SPDK_TEST_RAID=1 00:00:25.783 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:25.789 RUN_NIGHTLY=0 00:00:25.792 [Pipeline] } 00:00:25.807 [Pipeline] // stage 00:00:25.819 [Pipeline] stage 00:00:25.821 [Pipeline] { (Run VM) 00:00:25.834 [Pipeline] sh 00:00:26.111 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:26.112 + echo 'Start stage prepare_nvme.sh' 00:00:26.112 Start stage prepare_nvme.sh 00:00:26.112 + [[ -n 0 ]] 00:00:26.112 + disk_prefix=ex0 00:00:26.112 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:26.112 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:26.112 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:26.112 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.112 ++ SPDK_RUN_ASAN=1 00:00:26.112 ++ SPDK_RUN_UBSAN=1 00:00:26.112 ++ SPDK_TEST_RAID=1 00:00:26.112 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.112 ++ RUN_NIGHTLY=0 00:00:26.112 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:26.112 + nvme_files=() 00:00:26.112 + declare -A nvme_files 00:00:26.112 + backend_dir=/var/lib/libvirt/images/backends 00:00:26.112 + nvme_files['nvme.img']=5G 00:00:26.112 + nvme_files['nvme-cmb.img']=5G 00:00:26.112 + nvme_files['nvme-multi0.img']=4G 00:00:26.112 + nvme_files['nvme-multi1.img']=4G 00:00:26.112 + nvme_files['nvme-multi2.img']=4G 00:00:26.112 + nvme_files['nvme-openstack.img']=8G 00:00:26.112 + nvme_files['nvme-zns.img']=5G 00:00:26.112 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:26.112 + (( SPDK_TEST_FTL == 1 )) 00:00:26.112 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:26.112 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.112 + for nvme in "${!nvme_files[@]}" 00:00:26.112 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:26.112 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.112 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:26.112 + echo 'End stage prepare_nvme.sh' 00:00:26.112 End stage prepare_nvme.sh 00:00:26.122 [Pipeline] sh 00:00:26.399 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:26.399 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:00:26.399 00:00:26.399 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:26.399 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:26.399 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:26.399 HELP=0 00:00:26.399 DRY_RUN=0 00:00:26.399 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:26.399 NVME_DISKS_TYPE=nvme,nvme, 00:00:26.399 NVME_AUTO_CREATE=0 00:00:26.399 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:26.399 NVME_CMB=,, 00:00:26.399 NVME_PMR=,, 00:00:26.399 NVME_ZNS=,, 00:00:26.399 NVME_MS=,, 00:00:26.399 NVME_FDP=,, 00:00:26.399 
SPDK_VAGRANT_DISTRO=fedora38 00:00:26.399 SPDK_VAGRANT_VMCPU=10 00:00:26.399 SPDK_VAGRANT_VMRAM=12288 00:00:26.399 SPDK_VAGRANT_PROVIDER=libvirt 00:00:26.399 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:26.399 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:26.399 SPDK_OPENSTACK_NETWORK=0 00:00:26.399 VAGRANT_PACKAGE_BOX=0 00:00:26.399 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:26.399 FORCE_DISTRO=true 00:00:26.399 VAGRANT_BOX_VERSION= 00:00:26.399 EXTRA_VAGRANTFILES= 00:00:26.399 NIC_MODEL=e1000 00:00:26.399 00:00:26.399 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora38-libvirt' 00:00:26.399 /var/jenkins/workspace/raid-vg-autotest/fedora38-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:29.679 Bringing machine 'default' up with 'libvirt' provider... 00:00:30.244 ==> default: Creating image (snapshot of base box volume). 00:00:30.503 ==> default: Creating domain with the following settings... 00:00:30.503 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721905905_f68e7e3a9b7884a0cfe0 00:00:30.503 ==> default: -- Domain type: kvm 00:00:30.503 ==> default: -- Cpus: 10 00:00:30.503 ==> default: -- Feature: acpi 00:00:30.503 ==> default: -- Feature: apic 00:00:30.503 ==> default: -- Feature: pae 00:00:30.503 ==> default: -- Memory: 12288M 00:00:30.503 ==> default: -- Memory Backing: hugepages: 00:00:30.503 ==> default: -- Management MAC: 00:00:30.503 ==> default: -- Loader: 00:00:30.503 ==> default: -- Nvram: 00:00:30.503 ==> default: -- Base box: spdk/fedora38 00:00:30.503 ==> default: -- Storage pool: default 00:00:30.503 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721905905_f68e7e3a9b7884a0cfe0.img (20G) 00:00:30.503 ==> default: -- Volume Cache: default 00:00:30.503 ==> default: -- Kernel: 00:00:30.503 ==> default: -- Initrd: 00:00:30.503 ==> default: -- Graphics Type: vnc 00:00:30.503 ==> default: -- Graphics Port: -1 00:00:30.503 ==> default: -- Graphics IP: 127.0.0.1 00:00:30.503 ==> default: -- Graphics Password: Not defined 00:00:30.503 ==> default: -- Video Type: cirrus 00:00:30.503 ==> default: -- Video VRAM: 9216 00:00:30.503 ==> default: -- Sound Type: 00:00:30.503 ==> default: -- Keymap: en-us 00:00:30.503 ==> default: -- TPM Path: 00:00:30.503 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:30.503 ==> default: -- Command line args: 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:30.503 ==> default: -> value=-drive, 00:00:30.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:30.503 ==> default: -> value=-drive, 00:00:30.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.503 ==> default: -> value=-drive, 00:00:30.503 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.503 ==> default: -> value=-drive, 00:00:30.503 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:30.503 ==> default: -> value=-device, 00:00:30.503 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:30.503 ==> default: Creating shared folders metadata... 00:00:30.503 ==> default: Starting domain. 00:00:32.404 ==> default: Waiting for domain to get an IP address... 00:00:47.315 ==> default: Waiting for SSH to become available... 00:00:48.248 ==> default: Configuring and enabling network interfaces... 00:00:52.431 default: SSH address: 192.168.121.88:22 00:00:52.431 default: SSH username: vagrant 00:00:52.431 default: SSH auth method: private key 00:00:54.331 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:02.457 ==> default: Mounting SSHFS shared folder... 00:01:03.023 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:03.023 ==> default: Checking Mount.. 00:01:04.398 ==> default: Folder Successfully Mounted! 00:01:04.398 ==> default: Running provisioner: file... 00:01:04.964 default: ~/.gitconfig => .gitconfig 00:01:05.222 00:01:05.222 SUCCESS! 00:01:05.222 00:01:05.222 cd to /var/jenkins/workspace/raid-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:05.222 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:05.222 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:05.222 00:01:05.231 [Pipeline] } 00:01:05.250 [Pipeline] // stage 00:01:05.259 [Pipeline] dir 00:01:05.260 Running in /var/jenkins/workspace/raid-vg-autotest/fedora38-libvirt 00:01:05.262 [Pipeline] { 00:01:05.277 [Pipeline] catchError 00:01:05.279 [Pipeline] { 00:01:05.293 [Pipeline] sh 00:01:05.570 + vagrant ssh-config --host vagrant 00:01:05.571 + sed -ne /^Host/,$p 00:01:05.571 + tee ssh_conf 00:01:09.755 Host vagrant 00:01:09.755 HostName 192.168.121.88 00:01:09.755 User vagrant 00:01:09.755 Port 22 00:01:09.755 UserKnownHostsFile /dev/null 00:01:09.755 StrictHostKeyChecking no 00:01:09.755 PasswordAuthentication no 00:01:09.755 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:09.755 IdentitiesOnly yes 00:01:09.755 LogLevel FATAL 00:01:09.755 ForwardAgent yes 00:01:09.755 ForwardX11 yes 00:01:09.755 00:01:09.768 [Pipeline] withEnv 00:01:09.771 [Pipeline] { 00:01:09.786 [Pipeline] sh 00:01:10.063 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:10.063 source /etc/os-release 00:01:10.063 [[ -e /image.version ]] && img=$(< /image.version) 00:01:10.063 # Minimal, systemd-like check. 
00:01:10.063 if [[ -e /.dockerenv ]]; then 00:01:10.063 # Clear garbage from the node's name: 00:01:10.063 # agt-er_autotest_547-896 -> autotest_547-896 00:01:10.063 # $HOSTNAME is the actual container id 00:01:10.063 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:10.063 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:10.063 # We can assume this is a mount from a host where container is running, 00:01:10.063 # so fetch its hostname to easily identify the target swarm worker. 00:01:10.063 container="$(< /etc/hostname) ($agent)" 00:01:10.063 else 00:01:10.063 # Fallback 00:01:10.063 container=$agent 00:01:10.063 fi 00:01:10.063 fi 00:01:10.063 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:10.063 00:01:10.367 [Pipeline] } 00:01:10.386 [Pipeline] // withEnv 00:01:10.395 [Pipeline] setCustomBuildProperty 00:01:10.411 [Pipeline] stage 00:01:10.413 [Pipeline] { (Tests) 00:01:10.431 [Pipeline] sh 00:01:10.713 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:10.985 [Pipeline] sh 00:01:11.264 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:11.538 [Pipeline] timeout 00:01:11.539 Timeout set to expire in 1 hr 30 min 00:01:11.541 [Pipeline] { 00:01:11.558 [Pipeline] sh 00:01:11.837 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:12.405 HEAD is now at 86fd5638b autotest: reduce RAID tests runs 00:01:12.420 [Pipeline] sh 00:01:12.700 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:12.974 [Pipeline] sh 00:01:13.254 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:13.531 [Pipeline] sh 00:01:13.812 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:14.070 ++ readlink -f spdk_repo 00:01:14.070 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:14.070 + [[ -n /home/vagrant/spdk_repo ]] 00:01:14.070 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:14.070 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:14.070 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:14.070 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:14.070 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:14.070 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:14.070 + cd /home/vagrant/spdk_repo 00:01:14.070 + source /etc/os-release 00:01:14.070 ++ NAME='Fedora Linux' 00:01:14.070 ++ VERSION='38 (Cloud Edition)' 00:01:14.070 ++ ID=fedora 00:01:14.070 ++ VERSION_ID=38 00:01:14.070 ++ VERSION_CODENAME= 00:01:14.070 ++ PLATFORM_ID=platform:f38 00:01:14.070 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:14.070 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:14.070 ++ LOGO=fedora-logo-icon 00:01:14.070 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:14.070 ++ HOME_URL=https://fedoraproject.org/ 00:01:14.070 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:14.070 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:14.070 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:14.070 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:14.070 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:14.070 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:14.070 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:14.070 ++ SUPPORT_END=2024-05-14 00:01:14.070 ++ VARIANT='Cloud Edition' 00:01:14.070 ++ VARIANT_ID=cloud 00:01:14.070 + uname -a 00:01:14.070 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:14.070 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:14.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:14.588 Hugepages 00:01:14.588 node hugesize free / total 00:01:14.588 node0 1048576kB 0 / 0 00:01:14.588 node0 2048kB 0 / 0 00:01:14.588 00:01:14.588 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:14.588 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:14.588 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:14.588 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:14.588 + rm -f /tmp/spdk-ld-path 00:01:14.588 + source autorun-spdk.conf 00:01:14.588 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.588 ++ SPDK_RUN_ASAN=1 00:01:14.588 ++ SPDK_RUN_UBSAN=1 00:01:14.588 ++ SPDK_TEST_RAID=1 00:01:14.588 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.588 ++ RUN_NIGHTLY=0 00:01:14.588 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.588 + [[ -n '' ]] 00:01:14.588 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:14.588 + for M in /var/spdk/build-*-manifest.txt 00:01:14.588 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.588 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.588 + for M in /var/spdk/build-*-manifest.txt 00:01:14.588 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.588 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.588 ++ uname 00:01:14.588 + [[ Linux == \L\i\n\u\x ]] 00:01:14.588 + sudo dmesg -T 00:01:14.588 + sudo dmesg --clear 00:01:14.588 + dmesg_pid=5108 00:01:14.588 + sudo dmesg -Tw 00:01:14.588 + [[ Fedora Linux == FreeBSD ]] 00:01:14.588 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.588 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.588 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.588 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.589 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.589 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.589 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.589 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.589 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.589 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.589 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.589 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.589 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.589 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.589 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:14.589 Test configuration: 00:01:14.589 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.589 SPDK_RUN_ASAN=1 00:01:14.589 SPDK_RUN_UBSAN=1 00:01:14.589 SPDK_TEST_RAID=1 00:01:14.589 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.848 RUN_NIGHTLY=0 11:12:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:14.848 11:12:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.848 11:12:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.848 11:12:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.848 11:12:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.848 11:12:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.848 11:12:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.848 11:12:30 -- paths/export.sh@5 -- $ export PATH 00:01:14.848 11:12:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.848 11:12:30 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:14.848 11:12:30 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:14.848 11:12:30 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721905950.XXXXXX 00:01:14.848 11:12:30 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721905950.xEioue 00:01:14.848 11:12:30 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:14.848 11:12:30 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:14.848 11:12:30 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 
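The test run above is driven entirely by the autorun-spdk.conf that autorun.sh sources and echoes back as "Test configuration". A sketch of recreating that file and invoking autorun with it, matching the values printed in this run (paths as used on the VM):

  cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
  SPDK_RUN_FUNCTIONAL_TEST=1
  SPDK_RUN_ASAN=1
  SPDK_RUN_UBSAN=1
  SPDK_TEST_RAID=1
  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
  RUN_NIGHTLY=0
  EOF
  cd /home/vagrant/spdk_repo && spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf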
00:01:14.848 11:12:30 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:14.848 11:12:30 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.848 11:12:30 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:14.848 11:12:30 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:14.848 11:12:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.848 11:12:30 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:14.848 11:12:30 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:14.848 11:12:30 -- pm/common@17 -- $ local monitor 00:01:14.848 11:12:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.848 11:12:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.848 11:12:30 -- pm/common@21 -- $ date +%s 00:01:14.848 11:12:30 -- pm/common@25 -- $ sleep 1 00:01:14.848 11:12:30 -- pm/common@21 -- $ date +%s 00:01:14.848 11:12:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721905950 00:01:14.848 11:12:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721905950 00:01:14.848 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721905950_collect-vmstat.pm.log 00:01:14.848 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721905950_collect-cpu-load.pm.log 00:01:15.783 11:12:31 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:15.783 11:12:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.783 11:12:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.783 11:12:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:15.783 11:12:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.783 Thu Jul 25 11:12:31 AM UTC 2024 00:01:15.783 11:12:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.783 v24.09-pre-322-g86fd5638b 00:01:15.783 11:12:31 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:15.783 11:12:31 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:15.783 11:12:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:15.783 11:12:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:15.783 11:12:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.783 ************************************ 00:01:15.783 START TEST asan 00:01:15.783 ************************************ 00:01:15.783 using asan 00:01:15.783 11:12:31 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:15.783 00:01:15.783 real 0m0.000s 00:01:15.783 user 0m0.000s 00:01:15.783 sys 0m0.000s 00:01:15.783 11:12:31 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:15.783 ************************************ 00:01:15.783 END TEST asan 00:01:15.783 ************************************ 00:01:15.783 11:12:31 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.783 11:12:31 -- spdk/autobuild.sh@23 -- $ '[' 
1 -eq 1 ']' 00:01:15.783 11:12:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.783 11:12:31 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:15.783 11:12:31 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:15.783 11:12:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.783 ************************************ 00:01:15.783 START TEST ubsan 00:01:15.783 ************************************ 00:01:15.783 using ubsan 00:01:15.783 11:12:31 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:15.783 00:01:15.783 real 0m0.000s 00:01:15.783 user 0m0.000s 00:01:15.783 sys 0m0.000s 00:01:15.783 11:12:31 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:15.783 11:12:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.783 ************************************ 00:01:15.783 END TEST ubsan 00:01:15.783 ************************************ 00:01:15.783 11:12:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.783 11:12:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.783 11:12:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.783 11:12:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:16.041 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:16.041 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:16.607 Using 'verbs' RDMA provider 00:01:29.776 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:44.689 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:44.689 Creating mk/config.mk...done. 00:01:44.689 Creating mk/cc.flags.mk...done. 00:01:44.689 Type 'make' to build. 00:01:44.689 11:12:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:44.689 11:12:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:44.689 11:12:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:44.690 11:12:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.690 ************************************ 00:01:44.690 START TEST make 00:01:44.690 ************************************ 00:01:44.690 11:12:58 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:44.690 make[1]: Nothing to be done for 'all'. 
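At this point autobuild has configured SPDK with ASAN, UBSAN and raid5f enabled and started make -j10, which in turn kicks off the bundled DPDK Meson configure shown next. A trimmed sketch of the same configure-and-build step, assuming a checkout at /home/vagrant/spdk_repo/spdk; the full flag list in this run also includes --with-rdma, --with-idxd, --with-fio=/usr/src/fio, --with-iscsi-initiator and --with-ublk, as printed above:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --enable-asan --enable-ubsan \
              --disable-unit-tests --with-raid5f --with-shared
  make -j10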
00:01:54.659 The Meson build system 00:01:54.660 Version: 1.3.1 00:01:54.660 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:54.660 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:54.660 Build type: native build 00:01:54.660 Program cat found: YES (/usr/bin/cat) 00:01:54.660 Project name: DPDK 00:01:54.660 Project version: 24.03.0 00:01:54.660 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:54.660 C linker for the host machine: cc ld.bfd 2.39-16 00:01:54.660 Host machine cpu family: x86_64 00:01:54.660 Host machine cpu: x86_64 00:01:54.660 Message: ## Building in Developer Mode ## 00:01:54.660 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.660 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.660 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.660 Program python3 found: YES (/usr/bin/python3) 00:01:54.660 Program cat found: YES (/usr/bin/cat) 00:01:54.660 Compiler for C supports arguments -march=native: YES 00:01:54.660 Checking for size of "void *" : 8 00:01:54.660 Checking for size of "void *" : 8 (cached) 00:01:54.660 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:54.660 Library m found: YES 00:01:54.660 Library numa found: YES 00:01:54.660 Has header "numaif.h" : YES 00:01:54.660 Library fdt found: NO 00:01:54.660 Library execinfo found: NO 00:01:54.660 Has header "execinfo.h" : YES 00:01:54.660 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:54.660 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.660 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.660 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.660 Run-time dependency openssl found: YES 3.0.9 00:01:54.660 Run-time dependency libpcap found: YES 1.10.4 00:01:54.660 Has header "pcap.h" with dependency libpcap: YES 00:01:54.660 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.660 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.660 Compiler for C supports arguments -Wformat: YES 00:01:54.660 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.660 Compiler for C supports arguments -Wformat-security: NO 00:01:54.660 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.660 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.660 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.660 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.660 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.660 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.660 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.660 Compiler for C supports arguments -Wundef: YES 00:01:54.660 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.660 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.660 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.660 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.660 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.660 Program objdump found: YES (/usr/bin/objdump) 00:01:54.660 Compiler for C supports arguments -mavx512f: YES 00:01:54.660 Checking if "AVX512 checking" compiles: YES 00:01:54.660 Fetching value of define "__SSE4_2__" : 1 00:01:54.660 Fetching value of define 
"__AES__" : 1 00:01:54.660 Fetching value of define "__AVX__" : 1 00:01:54.660 Fetching value of define "__AVX2__" : 1 00:01:54.660 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.660 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.660 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.660 Fetching value of define "__AVX512F__" : (undefined) 00:01:54.660 Fetching value of define "__AVX512VL__" : (undefined) 00:01:54.660 Fetching value of define "__PCLMUL__" : 1 00:01:54.660 Fetching value of define "__RDRND__" : 1 00:01:54.660 Fetching value of define "__RDSEED__" : 1 00:01:54.660 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.660 Fetching value of define "__znver1__" : (undefined) 00:01:54.660 Fetching value of define "__znver2__" : (undefined) 00:01:54.660 Fetching value of define "__znver3__" : (undefined) 00:01:54.660 Fetching value of define "__znver4__" : (undefined) 00:01:54.660 Library asan found: YES 00:01:54.660 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.660 Message: lib/log: Defining dependency "log" 00:01:54.660 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.660 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.660 Library rt found: YES 00:01:54.660 Checking for function "getentropy" : NO 00:01:54.660 Message: lib/eal: Defining dependency "eal" 00:01:54.660 Message: lib/ring: Defining dependency "ring" 00:01:54.660 Message: lib/rcu: Defining dependency "rcu" 00:01:54.660 Message: lib/mempool: Defining dependency "mempool" 00:01:54.660 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.660 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.660 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.660 Compiler for C supports arguments -mpclmul: YES 00:01:54.660 Compiler for C supports arguments -maes: YES 00:01:54.660 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.660 Compiler for C supports arguments -mavx512bw: YES 00:01:54.660 Compiler for C supports arguments -mavx512dq: YES 00:01:54.660 Compiler for C supports arguments -mavx512vl: YES 00:01:54.660 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.660 Compiler for C supports arguments -mavx2: YES 00:01:54.660 Compiler for C supports arguments -mavx: YES 00:01:54.660 Message: lib/net: Defining dependency "net" 00:01:54.660 Message: lib/meter: Defining dependency "meter" 00:01:54.660 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.660 Message: lib/pci: Defining dependency "pci" 00:01:54.660 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.660 Message: lib/hash: Defining dependency "hash" 00:01:54.660 Message: lib/timer: Defining dependency "timer" 00:01:54.660 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.660 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.660 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.660 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.660 Message: lib/power: Defining dependency "power" 00:01:54.660 Message: lib/reorder: Defining dependency "reorder" 00:01:54.660 Message: lib/security: Defining dependency "security" 00:01:54.660 Has header "linux/userfaultfd.h" : YES 00:01:54.660 Has header "linux/vduse.h" : YES 00:01:54.660 Message: lib/vhost: Defining dependency "vhost" 00:01:54.660 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.660 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.660 Message: 
drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.660 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.660 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.660 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.660 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.660 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.660 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.660 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.660 Program doxygen found: YES (/usr/bin/doxygen) 00:01:54.660 Configuring doxy-api-html.conf using configuration 00:01:54.660 Configuring doxy-api-man.conf using configuration 00:01:54.660 Program mandb found: YES (/usr/bin/mandb) 00:01:54.660 Program sphinx-build found: NO 00:01:54.660 Configuring rte_build_config.h using configuration 00:01:54.660 Message: 00:01:54.660 ================= 00:01:54.660 Applications Enabled 00:01:54.660 ================= 00:01:54.660 00:01:54.660 apps: 00:01:54.660 00:01:54.660 00:01:54.660 Message: 00:01:54.660 ================= 00:01:54.660 Libraries Enabled 00:01:54.660 ================= 00:01:54.660 00:01:54.660 libs: 00:01:54.660 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.660 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.660 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.660 00:01:54.660 Message: 00:01:54.660 =============== 00:01:54.660 Drivers Enabled 00:01:54.660 =============== 00:01:54.660 00:01:54.660 common: 00:01:54.660 00:01:54.660 bus: 00:01:54.660 pci, vdev, 00:01:54.660 mempool: 00:01:54.660 ring, 00:01:54.660 dma: 00:01:54.660 00:01:54.660 net: 00:01:54.660 00:01:54.660 crypto: 00:01:54.660 00:01:54.660 compress: 00:01:54.660 00:01:54.660 vdpa: 00:01:54.660 00:01:54.660 00:01:54.660 Message: 00:01:54.660 ================= 00:01:54.660 Content Skipped 00:01:54.660 ================= 00:01:54.660 00:01:54.660 apps: 00:01:54.660 dumpcap: explicitly disabled via build config 00:01:54.660 graph: explicitly disabled via build config 00:01:54.660 pdump: explicitly disabled via build config 00:01:54.660 proc-info: explicitly disabled via build config 00:01:54.660 test-acl: explicitly disabled via build config 00:01:54.660 test-bbdev: explicitly disabled via build config 00:01:54.660 test-cmdline: explicitly disabled via build config 00:01:54.660 test-compress-perf: explicitly disabled via build config 00:01:54.660 test-crypto-perf: explicitly disabled via build config 00:01:54.660 test-dma-perf: explicitly disabled via build config 00:01:54.660 test-eventdev: explicitly disabled via build config 00:01:54.660 test-fib: explicitly disabled via build config 00:01:54.660 test-flow-perf: explicitly disabled via build config 00:01:54.660 test-gpudev: explicitly disabled via build config 00:01:54.660 test-mldev: explicitly disabled via build config 00:01:54.660 test-pipeline: explicitly disabled via build config 00:01:54.660 test-pmd: explicitly disabled via build config 00:01:54.660 test-regex: explicitly disabled via build config 00:01:54.660 test-sad: explicitly disabled via build config 00:01:54.660 test-security-perf: explicitly disabled via build config 00:01:54.660 00:01:54.660 libs: 00:01:54.661 argparse: explicitly disabled via build config 00:01:54.661 metrics: explicitly disabled via build config 00:01:54.661 acl: explicitly disabled via build 
config 00:01:54.661 bbdev: explicitly disabled via build config 00:01:54.661 bitratestats: explicitly disabled via build config 00:01:54.661 bpf: explicitly disabled via build config 00:01:54.661 cfgfile: explicitly disabled via build config 00:01:54.661 distributor: explicitly disabled via build config 00:01:54.661 efd: explicitly disabled via build config 00:01:54.661 eventdev: explicitly disabled via build config 00:01:54.661 dispatcher: explicitly disabled via build config 00:01:54.661 gpudev: explicitly disabled via build config 00:01:54.661 gro: explicitly disabled via build config 00:01:54.661 gso: explicitly disabled via build config 00:01:54.661 ip_frag: explicitly disabled via build config 00:01:54.661 jobstats: explicitly disabled via build config 00:01:54.661 latencystats: explicitly disabled via build config 00:01:54.661 lpm: explicitly disabled via build config 00:01:54.661 member: explicitly disabled via build config 00:01:54.661 pcapng: explicitly disabled via build config 00:01:54.661 rawdev: explicitly disabled via build config 00:01:54.661 regexdev: explicitly disabled via build config 00:01:54.661 mldev: explicitly disabled via build config 00:01:54.661 rib: explicitly disabled via build config 00:01:54.661 sched: explicitly disabled via build config 00:01:54.661 stack: explicitly disabled via build config 00:01:54.661 ipsec: explicitly disabled via build config 00:01:54.661 pdcp: explicitly disabled via build config 00:01:54.661 fib: explicitly disabled via build config 00:01:54.661 port: explicitly disabled via build config 00:01:54.661 pdump: explicitly disabled via build config 00:01:54.661 table: explicitly disabled via build config 00:01:54.661 pipeline: explicitly disabled via build config 00:01:54.661 graph: explicitly disabled via build config 00:01:54.661 node: explicitly disabled via build config 00:01:54.661 00:01:54.661 drivers: 00:01:54.661 common/cpt: not in enabled drivers build config 00:01:54.661 common/dpaax: not in enabled drivers build config 00:01:54.661 common/iavf: not in enabled drivers build config 00:01:54.661 common/idpf: not in enabled drivers build config 00:01:54.661 common/ionic: not in enabled drivers build config 00:01:54.661 common/mvep: not in enabled drivers build config 00:01:54.661 common/octeontx: not in enabled drivers build config 00:01:54.661 bus/auxiliary: not in enabled drivers build config 00:01:54.661 bus/cdx: not in enabled drivers build config 00:01:54.661 bus/dpaa: not in enabled drivers build config 00:01:54.661 bus/fslmc: not in enabled drivers build config 00:01:54.661 bus/ifpga: not in enabled drivers build config 00:01:54.661 bus/platform: not in enabled drivers build config 00:01:54.661 bus/uacce: not in enabled drivers build config 00:01:54.661 bus/vmbus: not in enabled drivers build config 00:01:54.661 common/cnxk: not in enabled drivers build config 00:01:54.661 common/mlx5: not in enabled drivers build config 00:01:54.661 common/nfp: not in enabled drivers build config 00:01:54.661 common/nitrox: not in enabled drivers build config 00:01:54.661 common/qat: not in enabled drivers build config 00:01:54.661 common/sfc_efx: not in enabled drivers build config 00:01:54.661 mempool/bucket: not in enabled drivers build config 00:01:54.661 mempool/cnxk: not in enabled drivers build config 00:01:54.661 mempool/dpaa: not in enabled drivers build config 00:01:54.661 mempool/dpaa2: not in enabled drivers build config 00:01:54.661 mempool/octeontx: not in enabled drivers build config 00:01:54.661 mempool/stack: not in 
enabled drivers build config 00:01:54.661 dma/cnxk: not in enabled drivers build config 00:01:54.661 dma/dpaa: not in enabled drivers build config 00:01:54.661 dma/dpaa2: not in enabled drivers build config 00:01:54.661 dma/hisilicon: not in enabled drivers build config 00:01:54.661 dma/idxd: not in enabled drivers build config 00:01:54.661 dma/ioat: not in enabled drivers build config 00:01:54.661 dma/skeleton: not in enabled drivers build config 00:01:54.661 net/af_packet: not in enabled drivers build config 00:01:54.661 net/af_xdp: not in enabled drivers build config 00:01:54.661 net/ark: not in enabled drivers build config 00:01:54.661 net/atlantic: not in enabled drivers build config 00:01:54.661 net/avp: not in enabled drivers build config 00:01:54.661 net/axgbe: not in enabled drivers build config 00:01:54.661 net/bnx2x: not in enabled drivers build config 00:01:54.661 net/bnxt: not in enabled drivers build config 00:01:54.661 net/bonding: not in enabled drivers build config 00:01:54.661 net/cnxk: not in enabled drivers build config 00:01:54.661 net/cpfl: not in enabled drivers build config 00:01:54.661 net/cxgbe: not in enabled drivers build config 00:01:54.661 net/dpaa: not in enabled drivers build config 00:01:54.661 net/dpaa2: not in enabled drivers build config 00:01:54.661 net/e1000: not in enabled drivers build config 00:01:54.661 net/ena: not in enabled drivers build config 00:01:54.661 net/enetc: not in enabled drivers build config 00:01:54.661 net/enetfec: not in enabled drivers build config 00:01:54.661 net/enic: not in enabled drivers build config 00:01:54.661 net/failsafe: not in enabled drivers build config 00:01:54.661 net/fm10k: not in enabled drivers build config 00:01:54.661 net/gve: not in enabled drivers build config 00:01:54.661 net/hinic: not in enabled drivers build config 00:01:54.661 net/hns3: not in enabled drivers build config 00:01:54.661 net/i40e: not in enabled drivers build config 00:01:54.661 net/iavf: not in enabled drivers build config 00:01:54.661 net/ice: not in enabled drivers build config 00:01:54.661 net/idpf: not in enabled drivers build config 00:01:54.661 net/igc: not in enabled drivers build config 00:01:54.661 net/ionic: not in enabled drivers build config 00:01:54.661 net/ipn3ke: not in enabled drivers build config 00:01:54.661 net/ixgbe: not in enabled drivers build config 00:01:54.661 net/mana: not in enabled drivers build config 00:01:54.661 net/memif: not in enabled drivers build config 00:01:54.661 net/mlx4: not in enabled drivers build config 00:01:54.661 net/mlx5: not in enabled drivers build config 00:01:54.661 net/mvneta: not in enabled drivers build config 00:01:54.661 net/mvpp2: not in enabled drivers build config 00:01:54.661 net/netvsc: not in enabled drivers build config 00:01:54.661 net/nfb: not in enabled drivers build config 00:01:54.661 net/nfp: not in enabled drivers build config 00:01:54.661 net/ngbe: not in enabled drivers build config 00:01:54.661 net/null: not in enabled drivers build config 00:01:54.661 net/octeontx: not in enabled drivers build config 00:01:54.661 net/octeon_ep: not in enabled drivers build config 00:01:54.661 net/pcap: not in enabled drivers build config 00:01:54.661 net/pfe: not in enabled drivers build config 00:01:54.661 net/qede: not in enabled drivers build config 00:01:54.661 net/ring: not in enabled drivers build config 00:01:54.661 net/sfc: not in enabled drivers build config 00:01:54.661 net/softnic: not in enabled drivers build config 00:01:54.661 net/tap: not in enabled drivers build 
config 00:01:54.661 net/thunderx: not in enabled drivers build config 00:01:54.661 net/txgbe: not in enabled drivers build config 00:01:54.661 net/vdev_netvsc: not in enabled drivers build config 00:01:54.661 net/vhost: not in enabled drivers build config 00:01:54.661 net/virtio: not in enabled drivers build config 00:01:54.661 net/vmxnet3: not in enabled drivers build config 00:01:54.661 raw/*: missing internal dependency, "rawdev" 00:01:54.661 crypto/armv8: not in enabled drivers build config 00:01:54.661 crypto/bcmfs: not in enabled drivers build config 00:01:54.661 crypto/caam_jr: not in enabled drivers build config 00:01:54.661 crypto/ccp: not in enabled drivers build config 00:01:54.661 crypto/cnxk: not in enabled drivers build config 00:01:54.661 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.661 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.661 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.661 crypto/mlx5: not in enabled drivers build config 00:01:54.661 crypto/mvsam: not in enabled drivers build config 00:01:54.661 crypto/nitrox: not in enabled drivers build config 00:01:54.661 crypto/null: not in enabled drivers build config 00:01:54.661 crypto/octeontx: not in enabled drivers build config 00:01:54.661 crypto/openssl: not in enabled drivers build config 00:01:54.661 crypto/scheduler: not in enabled drivers build config 00:01:54.661 crypto/uadk: not in enabled drivers build config 00:01:54.661 crypto/virtio: not in enabled drivers build config 00:01:54.661 compress/isal: not in enabled drivers build config 00:01:54.661 compress/mlx5: not in enabled drivers build config 00:01:54.661 compress/nitrox: not in enabled drivers build config 00:01:54.661 compress/octeontx: not in enabled drivers build config 00:01:54.661 compress/zlib: not in enabled drivers build config 00:01:54.661 regex/*: missing internal dependency, "regexdev" 00:01:54.661 ml/*: missing internal dependency, "mldev" 00:01:54.661 vdpa/ifc: not in enabled drivers build config 00:01:54.661 vdpa/mlx5: not in enabled drivers build config 00:01:54.661 vdpa/nfp: not in enabled drivers build config 00:01:54.661 vdpa/sfc: not in enabled drivers build config 00:01:54.661 event/*: missing internal dependency, "eventdev" 00:01:54.661 baseband/*: missing internal dependency, "bbdev" 00:01:54.661 gpu/*: missing internal dependency, "gpudev" 00:01:54.661 00:01:54.661 00:01:54.661 Build targets in project: 85 00:01:54.661 00:01:54.661 DPDK 24.03.0 00:01:54.661 00:01:54.661 User defined options 00:01:54.661 buildtype : debug 00:01:54.661 default_library : shared 00:01:54.661 libdir : lib 00:01:54.661 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:54.661 b_sanitize : address 00:01:54.661 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.661 c_link_args : 00:01:54.661 cpu_instruction_set: native 00:01:54.661 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:54.661 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:54.662 enable_docs : false 00:01:54.662 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:54.662 enable_kmods : false 00:01:54.662 max_lcores : 128 00:01:54.662 tests : false 00:01:54.662 00:01:54.662 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.662 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:54.662 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:54.662 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.662 [3/268] Linking static target lib/librte_kvargs.a 00:01:54.662 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:54.662 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.662 [6/268] Linking static target lib/librte_log.a 00:01:54.662 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.920 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.920 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:54.920 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.179 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.179 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.179 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.179 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.179 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.179 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.179 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.437 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.437 [19/268] Linking static target lib/librte_telemetry.a 00:01:55.437 [20/268] Linking target lib/librte_log.so.24.1 00:01:55.696 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.696 [22/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.955 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.955 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:55.955 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:56.215 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.215 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:56.215 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:56.215 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.215 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:56.215 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.215 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.215 [33/268] Linking target lib/librte_telemetry.so.24.1 00:01:56.473 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.473 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.473 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.737 [37/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.995 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.995 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.995 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:56.995 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:57.253 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:57.253 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.253 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.253 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.511 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.511 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.511 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.511 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.768 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:57.768 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.768 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.027 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.286 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.286 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.286 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.545 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.545 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.545 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.545 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.804 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.804 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.804 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.062 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.321 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.321 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.321 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.580 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.580 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.839 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.839 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:59.839 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.840 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.840 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.840 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.840 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.840 [77/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.099 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.099 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.382 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.382 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.640 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.640 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.899 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.899 [85/268] Linking static target lib/librte_eal.a 00:02:01.158 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:01.158 [87/268] Linking static target lib/librte_ring.a 00:02:01.158 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:01.158 [89/268] Linking static target lib/librte_rcu.a 00:02:01.158 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:01.416 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:01.416 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:01.416 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.416 [94/268] Linking static target lib/librte_mempool.a 00:02:01.416 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:01.676 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.676 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:01.676 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.934 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:01.934 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:02.503 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:02.503 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:02.503 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.763 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.763 [105/268] Linking static target lib/librte_mbuf.a 00:02:02.763 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.763 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:02.763 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.763 [109/268] Linking static target lib/librte_net.a 00:02:02.763 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.022 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.023 [112/268] Linking static target lib/librte_meter.a 00:02:03.352 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.610 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.610 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.610 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.610 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.870 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.870 [119/268] Generating 
lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.129 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.696 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:04.696 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.696 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.696 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.696 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.696 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.696 [127/268] Linking static target lib/librte_pci.a 00:02:04.954 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.954 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.212 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.212 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.212 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.212 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.212 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.470 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.470 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.470 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.470 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.470 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.470 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.728 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.728 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:05.728 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:05.728 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.728 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.728 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.728 [147/268] Linking static target lib/librte_cmdline.a 00:02:06.302 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:06.302 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:06.302 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.302 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.302 [152/268] Linking static target lib/librte_ethdev.a 00:02:06.302 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:06.302 [154/268] Linking static target lib/librte_timer.a 00:02:06.560 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:06.819 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:06.819 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:06.819 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.078 [159/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.078 [160/268] Linking static target lib/librte_compressdev.a 00:02:07.078 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.078 [162/268] Linking static target lib/librte_hash.a 00:02:07.078 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.337 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.337 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.598 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.598 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.598 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.598 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.858 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.858 [171/268] Linking static target lib/librte_dmadev.a 00:02:07.858 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.858 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.116 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.375 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.375 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.375 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:08.375 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:08.634 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.634 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:08.634 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:08.893 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.893 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.893 [184/268] Linking static target lib/librte_cryptodev.a 00:02:09.151 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.151 [186/268] Linking static target lib/librte_power.a 00:02:09.409 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.409 [188/268] Linking static target lib/librte_reorder.a 00:02:09.409 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.409 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.667 [191/268] Linking static target lib/librte_security.a 00:02:09.667 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.667 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:09.925 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.183 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.183 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.183 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.441 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
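The "User defined options" block above is meson's configuration summary for the DPDK tree bundled under dpdk/: a debug build of shared libraries with the address sanitizer enabled, nearly all apps and optional libraries disabled, and only the bus, bus/pci, bus/vdev and mempool/ring drivers built. The numbered [N/268] lines that follow are ninja compiling that configuration. A rough standalone equivalent, assuming standard meson/DPDK option names (the exact flags passed by SPDK's DPDK build wrapper may differ):

  # Sketch only: reproduces the configuration summarised above.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --buildtype=debug --default-library=shared \
      -Db_sanitize=address -Dmax_lcores=128 -Dtests=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
      # plus the disable_apps=/disable_libs= lists printed in the summary
  ninja -C build-tmp -j 10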
00:02:10.699 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.699 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.958 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.958 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.958 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.958 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.216 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.475 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.476 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.476 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.476 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.476 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.476 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.741 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.741 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.741 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.741 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:11.741 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.741 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.741 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.741 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:12.002 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.002 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.002 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.002 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.002 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.002 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.002 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:12.260 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.827 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.086 [229/268] Linking target lib/librte_eal.so.24.1 00:02:13.086 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:13.086 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.086 [232/268] Linking target lib/librte_ring.so.24.1 00:02:13.086 [233/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.086 [234/268] Linking target lib/librte_pci.so.24.1 00:02:13.086 [235/268] Linking target lib/librte_meter.so.24.1 00:02:13.086 [236/268] Linking target lib/librte_timer.so.24.1 00:02:13.086 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.345 [238/268] 
Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.345 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.345 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.345 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.345 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.345 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:13.345 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:13.345 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.603 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.603 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.603 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.603 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.862 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.862 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:13.862 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.862 [253/268] Linking target lib/librte_net.so.24.1 00:02:13.862 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.120 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.120 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.121 [257/268] Linking target lib/librte_security.so.24.1 00:02:14.121 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.121 [259/268] Linking target lib/librte_hash.so.24.1 00:02:14.380 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.380 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.638 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:14.638 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.897 [264/268] Linking target lib/librte_power.so.24.1 00:02:17.446 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.446 [266/268] Linking static target lib/librte_vhost.a 00:02:19.346 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.346 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:19.346 INFO: autodetecting backend as ninja 00:02:19.346 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:20.306 CC lib/ut_mock/mock.o 00:02:20.306 CC lib/log/log.o 00:02:20.306 CC lib/ut/ut.o 00:02:20.306 CC lib/log/log_flags.o 00:02:20.306 CC lib/log/log_deprecated.o 00:02:20.566 LIB libspdk_log.a 00:02:20.566 LIB libspdk_ut.a 00:02:20.566 LIB libspdk_ut_mock.a 00:02:20.566 SO libspdk_ut.so.2.0 00:02:20.566 SO libspdk_log.so.7.0 00:02:20.566 SO libspdk_ut_mock.so.6.0 00:02:20.824 SYMLINK libspdk_ut_mock.so 00:02:20.824 SYMLINK libspdk_ut.so 00:02:20.824 SYMLINK libspdk_log.so 00:02:21.082 CC lib/dma/dma.o 00:02:21.082 CC lib/util/base64.o 00:02:21.082 CC lib/ioat/ioat.o 00:02:21.082 CC lib/util/cpuset.o 00:02:21.082 CC lib/util/bit_array.o 00:02:21.082 CC lib/util/crc32.o 00:02:21.082 CC lib/util/crc16.o 00:02:21.082 CC lib/util/crc32c.o 00:02:21.082 CXX lib/trace_parser/trace.o 00:02:21.082 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:21.082 CC lib/util/crc32_ieee.o 00:02:21.082 CC lib/util/crc64.o 00:02:21.082 CC lib/util/dif.o 00:02:21.341 CC lib/util/fd.o 00:02:21.341 LIB libspdk_dma.a 00:02:21.341 CC lib/util/fd_group.o 00:02:21.341 SO libspdk_dma.so.4.0 00:02:21.341 CC lib/util/file.o 00:02:21.341 CC lib/util/hexlify.o 00:02:21.341 CC lib/util/iov.o 00:02:21.341 SYMLINK libspdk_dma.so 00:02:21.341 CC lib/util/math.o 00:02:21.341 LIB libspdk_ioat.a 00:02:21.341 CC lib/util/net.o 00:02:21.341 SO libspdk_ioat.so.7.0 00:02:21.600 CC lib/vfio_user/host/vfio_user.o 00:02:21.600 CC lib/util/pipe.o 00:02:21.600 CC lib/util/strerror_tls.o 00:02:21.600 SYMLINK libspdk_ioat.so 00:02:21.600 CC lib/util/string.o 00:02:21.600 CC lib/util/uuid.o 00:02:21.600 CC lib/util/xor.o 00:02:21.600 CC lib/util/zipf.o 00:02:21.600 LIB libspdk_vfio_user.a 00:02:21.859 SO libspdk_vfio_user.so.5.0 00:02:21.859 SYMLINK libspdk_vfio_user.so 00:02:21.859 LIB libspdk_util.a 00:02:22.118 SO libspdk_util.so.10.0 00:02:22.118 LIB libspdk_trace_parser.a 00:02:22.118 SYMLINK libspdk_util.so 00:02:22.118 SO libspdk_trace_parser.so.5.0 00:02:22.377 SYMLINK libspdk_trace_parser.so 00:02:22.377 CC lib/rdma_utils/rdma_utils.o 00:02:22.377 CC lib/env_dpdk/env.o 00:02:22.377 CC lib/conf/conf.o 00:02:22.377 CC lib/env_dpdk/memory.o 00:02:22.377 CC lib/env_dpdk/pci.o 00:02:22.377 CC lib/idxd/idxd.o 00:02:22.377 CC lib/env_dpdk/init.o 00:02:22.377 CC lib/json/json_parse.o 00:02:22.377 CC lib/rdma_provider/common.o 00:02:22.377 CC lib/vmd/vmd.o 00:02:22.636 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:22.636 LIB libspdk_conf.a 00:02:22.636 CC lib/json/json_util.o 00:02:22.636 SO libspdk_conf.so.6.0 00:02:22.636 LIB libspdk_rdma_utils.a 00:02:22.895 SO libspdk_rdma_utils.so.1.0 00:02:22.895 SYMLINK libspdk_conf.so 00:02:22.895 CC lib/json/json_write.o 00:02:22.895 SYMLINK libspdk_rdma_utils.so 00:02:22.895 CC lib/vmd/led.o 00:02:22.895 CC lib/env_dpdk/threads.o 00:02:22.895 CC lib/env_dpdk/pci_ioat.o 00:02:22.895 LIB libspdk_rdma_provider.a 00:02:22.895 SO libspdk_rdma_provider.so.6.0 00:02:22.895 CC lib/idxd/idxd_user.o 00:02:22.895 SYMLINK libspdk_rdma_provider.so 00:02:22.895 CC lib/idxd/idxd_kernel.o 00:02:22.895 CC lib/env_dpdk/pci_virtio.o 00:02:22.895 CC lib/env_dpdk/pci_vmd.o 00:02:23.154 CC lib/env_dpdk/pci_idxd.o 00:02:23.154 LIB libspdk_json.a 00:02:23.154 CC lib/env_dpdk/pci_event.o 00:02:23.154 CC lib/env_dpdk/sigbus_handler.o 00:02:23.154 CC lib/env_dpdk/pci_dpdk.o 00:02:23.154 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.154 SO libspdk_json.so.6.0 00:02:23.154 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.154 SYMLINK libspdk_json.so 00:02:23.154 LIB libspdk_idxd.a 00:02:23.412 LIB libspdk_vmd.a 00:02:23.412 SO libspdk_idxd.so.12.0 00:02:23.412 SO libspdk_vmd.so.6.0 00:02:23.412 SYMLINK libspdk_idxd.so 00:02:23.412 SYMLINK libspdk_vmd.so 00:02:23.412 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.412 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.412 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.412 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.671 LIB libspdk_jsonrpc.a 00:02:23.929 SO libspdk_jsonrpc.so.6.0 00:02:23.929 SYMLINK libspdk_jsonrpc.so 00:02:24.197 CC lib/rpc/rpc.o 00:02:24.471 LIB libspdk_env_dpdk.a 00:02:24.471 LIB libspdk_rpc.a 00:02:24.471 SO libspdk_rpc.so.6.0 00:02:24.471 SO libspdk_env_dpdk.so.15.0 00:02:24.471 SYMLINK libspdk_rpc.so 00:02:24.729 SYMLINK libspdk_env_dpdk.so 00:02:24.729 CC lib/notify/notify.o 00:02:24.729 CC lib/notify/notify_rpc.o 00:02:24.729 CC lib/trace/trace.o 00:02:24.729 CC 
lib/keyring/keyring.o 00:02:24.729 CC lib/trace/trace_rpc.o 00:02:24.729 CC lib/trace/trace_flags.o 00:02:24.729 CC lib/keyring/keyring_rpc.o 00:02:24.988 LIB libspdk_notify.a 00:02:24.988 SO libspdk_notify.so.6.0 00:02:24.988 LIB libspdk_keyring.a 00:02:24.988 LIB libspdk_trace.a 00:02:24.988 SO libspdk_keyring.so.1.0 00:02:24.988 SO libspdk_trace.so.10.0 00:02:24.988 SYMLINK libspdk_notify.so 00:02:25.246 SYMLINK libspdk_trace.so 00:02:25.246 SYMLINK libspdk_keyring.so 00:02:25.504 CC lib/sock/sock.o 00:02:25.505 CC lib/sock/sock_rpc.o 00:02:25.505 CC lib/thread/thread.o 00:02:25.505 CC lib/thread/iobuf.o 00:02:26.071 LIB libspdk_sock.a 00:02:26.071 SO libspdk_sock.so.10.0 00:02:26.071 SYMLINK libspdk_sock.so 00:02:26.330 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.330 CC lib/nvme/nvme_ctrlr.o 00:02:26.330 CC lib/nvme/nvme_fabric.o 00:02:26.330 CC lib/nvme/nvme_ns_cmd.o 00:02:26.330 CC lib/nvme/nvme_ns.o 00:02:26.330 CC lib/nvme/nvme_pcie_common.o 00:02:26.330 CC lib/nvme/nvme_pcie.o 00:02:26.330 CC lib/nvme/nvme_qpair.o 00:02:26.330 CC lib/nvme/nvme.o 00:02:27.266 CC lib/nvme/nvme_quirks.o 00:02:27.266 CC lib/nvme/nvme_transport.o 00:02:27.266 CC lib/nvme/nvme_discovery.o 00:02:27.266 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.524 LIB libspdk_thread.a 00:02:27.524 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.524 CC lib/nvme/nvme_tcp.o 00:02:27.524 SO libspdk_thread.so.10.1 00:02:27.524 CC lib/nvme/nvme_opal.o 00:02:27.524 SYMLINK libspdk_thread.so 00:02:27.524 CC lib/nvme/nvme_io_msg.o 00:02:27.781 CC lib/nvme/nvme_poll_group.o 00:02:27.781 CC lib/nvme/nvme_zns.o 00:02:28.039 CC lib/nvme/nvme_stubs.o 00:02:28.039 CC lib/nvme/nvme_auth.o 00:02:28.298 CC lib/nvme/nvme_cuse.o 00:02:28.298 CC lib/nvme/nvme_rdma.o 00:02:28.298 CC lib/accel/accel.o 00:02:28.556 CC lib/accel/accel_rpc.o 00:02:28.556 CC lib/blob/blobstore.o 00:02:28.556 CC lib/accel/accel_sw.o 00:02:28.814 CC lib/init/json_config.o 00:02:28.814 CC lib/virtio/virtio.o 00:02:28.814 CC lib/virtio/virtio_vhost_user.o 00:02:29.072 CC lib/init/subsystem.o 00:02:29.072 CC lib/init/subsystem_rpc.o 00:02:29.072 CC lib/init/rpc.o 00:02:29.346 CC lib/virtio/virtio_vfio_user.o 00:02:29.346 CC lib/virtio/virtio_pci.o 00:02:29.346 CC lib/blob/request.o 00:02:29.346 CC lib/blob/zeroes.o 00:02:29.346 LIB libspdk_init.a 00:02:29.346 CC lib/blob/blob_bs_dev.o 00:02:29.346 SO libspdk_init.so.5.0 00:02:29.604 SYMLINK libspdk_init.so 00:02:29.604 LIB libspdk_accel.a 00:02:29.604 LIB libspdk_virtio.a 00:02:29.604 SO libspdk_accel.so.16.0 00:02:29.604 SO libspdk_virtio.so.7.0 00:02:29.863 CC lib/event/app.o 00:02:29.863 CC lib/event/reactor.o 00:02:29.863 CC lib/event/app_rpc.o 00:02:29.863 CC lib/event/log_rpc.o 00:02:29.863 CC lib/event/scheduler_static.o 00:02:29.863 SYMLINK libspdk_accel.so 00:02:29.863 SYMLINK libspdk_virtio.so 00:02:30.120 CC lib/bdev/bdev.o 00:02:30.120 CC lib/bdev/bdev_rpc.o 00:02:30.120 CC lib/bdev/bdev_zone.o 00:02:30.121 CC lib/bdev/part.o 00:02:30.121 CC lib/bdev/scsi_nvme.o 00:02:30.121 LIB libspdk_nvme.a 00:02:30.379 SO libspdk_nvme.so.13.1 00:02:30.379 LIB libspdk_event.a 00:02:30.379 SO libspdk_event.so.14.0 00:02:30.379 SYMLINK libspdk_event.so 00:02:30.636 SYMLINK libspdk_nvme.so 00:02:33.165 LIB libspdk_blob.a 00:02:33.165 SO libspdk_blob.so.11.0 00:02:33.165 SYMLINK libspdk_blob.so 00:02:33.424 CC lib/blobfs/blobfs.o 00:02:33.424 CC lib/blobfs/tree.o 00:02:33.424 CC lib/lvol/lvol.o 00:02:33.683 LIB libspdk_bdev.a 00:02:33.683 SO libspdk_bdev.so.16.0 00:02:33.941 SYMLINK libspdk_bdev.so 00:02:33.941 CC lib/scsi/dev.o 
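From here the log is SPDK's own quiet build output: CC compiles an object, LIB archives a static library, SO links the shared library, SYMLINK publishes it under the build tree, and LINK lines later in the log produce executables. A minimal sketch of the commands behind it, assuming the usual configure-and-make flow (this run's actual ./configure invocation is not visible in this part of the log):

  # Sketch, not this job's exact invocation.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --with-dpdk=./dpdk/build   # point SPDK at the DPDK tree built above
  make -j10                              # emits the CC/LIB/SO/SYMLINK/LINK lines seen here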
00:02:33.941 CC lib/scsi/lun.o 00:02:33.941 CC lib/scsi/scsi.o 00:02:34.200 CC lib/scsi/port.o 00:02:34.200 CC lib/nvmf/ctrlr.o 00:02:34.200 CC lib/ftl/ftl_core.o 00:02:34.200 CC lib/ublk/ublk.o 00:02:34.200 CC lib/nbd/nbd.o 00:02:34.200 CC lib/nbd/nbd_rpc.o 00:02:34.200 CC lib/scsi/scsi_bdev.o 00:02:34.458 CC lib/scsi/scsi_pr.o 00:02:34.458 LIB libspdk_blobfs.a 00:02:34.458 CC lib/scsi/scsi_rpc.o 00:02:34.458 SO libspdk_blobfs.so.10.0 00:02:34.458 LIB libspdk_lvol.a 00:02:34.458 SO libspdk_lvol.so.10.0 00:02:34.458 CC lib/scsi/task.o 00:02:34.458 SYMLINK libspdk_blobfs.so 00:02:34.458 CC lib/ftl/ftl_init.o 00:02:34.716 CC lib/ftl/ftl_layout.o 00:02:34.716 SYMLINK libspdk_lvol.so 00:02:34.716 CC lib/ftl/ftl_debug.o 00:02:34.716 LIB libspdk_nbd.a 00:02:34.716 CC lib/ftl/ftl_io.o 00:02:34.716 SO libspdk_nbd.so.7.0 00:02:34.716 SYMLINK libspdk_nbd.so 00:02:34.716 CC lib/ftl/ftl_sb.o 00:02:34.716 CC lib/ftl/ftl_l2p.o 00:02:34.716 CC lib/ftl/ftl_l2p_flat.o 00:02:34.716 CC lib/ftl/ftl_nv_cache.o 00:02:34.975 CC lib/ftl/ftl_band.o 00:02:34.975 LIB libspdk_scsi.a 00:02:34.975 CC lib/ftl/ftl_band_ops.o 00:02:34.975 SO libspdk_scsi.so.9.0 00:02:34.975 CC lib/ublk/ublk_rpc.o 00:02:34.975 CC lib/nvmf/ctrlr_discovery.o 00:02:34.975 CC lib/ftl/ftl_writer.o 00:02:34.975 CC lib/ftl/ftl_rq.o 00:02:34.975 CC lib/ftl/ftl_reloc.o 00:02:35.233 SYMLINK libspdk_scsi.so 00:02:35.233 CC lib/ftl/ftl_l2p_cache.o 00:02:35.233 LIB libspdk_ublk.a 00:02:35.233 SO libspdk_ublk.so.3.0 00:02:35.233 SYMLINK libspdk_ublk.so 00:02:35.233 CC lib/ftl/ftl_p2l.o 00:02:35.492 CC lib/ftl/mngt/ftl_mngt.o 00:02:35.492 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.492 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.492 CC lib/iscsi/conn.o 00:02:35.492 CC lib/vhost/vhost.o 00:02:35.492 CC lib/vhost/vhost_rpc.o 00:02:35.828 CC lib/nvmf/ctrlr_bdev.o 00:02:35.828 CC lib/nvmf/subsystem.o 00:02:35.828 CC lib/vhost/vhost_scsi.o 00:02:35.828 CC lib/iscsi/init_grp.o 00:02:36.086 CC lib/nvmf/nvmf.o 00:02:36.086 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:36.086 CC lib/iscsi/iscsi.o 00:02:36.345 CC lib/vhost/vhost_blk.o 00:02:36.345 CC lib/vhost/rte_vhost_user.o 00:02:36.345 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:36.345 CC lib/nvmf/nvmf_rpc.o 00:02:36.345 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:36.604 CC lib/nvmf/transport.o 00:02:36.604 CC lib/nvmf/tcp.o 00:02:36.862 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:36.862 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:37.121 CC lib/nvmf/stubs.o 00:02:37.121 CC lib/nvmf/mdns_server.o 00:02:37.121 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:37.121 CC lib/nvmf/rdma.o 00:02:37.380 CC lib/nvmf/auth.o 00:02:37.380 CC lib/iscsi/md5.o 00:02:37.380 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:37.380 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:37.380 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:37.380 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:37.638 LIB libspdk_vhost.a 00:02:37.638 CC lib/ftl/utils/ftl_conf.o 00:02:37.638 SO libspdk_vhost.so.8.0 00:02:37.638 CC lib/iscsi/param.o 00:02:37.638 CC lib/ftl/utils/ftl_md.o 00:02:37.638 CC lib/iscsi/portal_grp.o 00:02:37.638 SYMLINK libspdk_vhost.so 00:02:37.901 CC lib/iscsi/tgt_node.o 00:02:37.901 CC lib/ftl/utils/ftl_mempool.o 00:02:37.901 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.159 CC lib/iscsi/iscsi_subsystem.o 00:02:38.159 CC lib/iscsi/iscsi_rpc.o 00:02:38.159 CC lib/iscsi/task.o 00:02:38.159 CC lib/ftl/utils/ftl_property.o 00:02:38.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.417 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.417 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.417 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.417 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:38.417 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.417 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.417 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.675 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.675 LIB libspdk_iscsi.a 00:02:38.675 CC lib/ftl/base/ftl_base_dev.o 00:02:38.675 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.675 CC lib/ftl/ftl_trace.o 00:02:38.675 SO libspdk_iscsi.so.8.0 00:02:38.934 SYMLINK libspdk_iscsi.so 00:02:38.934 LIB libspdk_ftl.a 00:02:39.192 SO libspdk_ftl.so.9.0 00:02:39.757 SYMLINK libspdk_ftl.so 00:02:40.017 LIB libspdk_nvmf.a 00:02:40.277 SO libspdk_nvmf.so.19.0 00:02:40.536 SYMLINK libspdk_nvmf.so 00:02:40.794 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.053 CC module/blob/bdev/blob_bdev.o 00:02:41.053 CC module/sock/posix/posix.o 00:02:41.053 CC module/accel/ioat/accel_ioat.o 00:02:41.053 CC module/keyring/file/keyring.o 00:02:41.053 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.053 CC module/accel/dsa/accel_dsa.o 00:02:41.053 CC module/accel/iaa/accel_iaa.o 00:02:41.053 CC module/accel/error/accel_error.o 00:02:41.053 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.053 LIB libspdk_env_dpdk_rpc.a 00:02:41.053 SO libspdk_env_dpdk_rpc.so.6.0 00:02:41.053 SYMLINK libspdk_env_dpdk_rpc.so 00:02:41.312 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.312 CC module/keyring/file/keyring_rpc.o 00:02:41.312 CC module/accel/error/accel_error_rpc.o 00:02:41.312 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.312 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.312 LIB libspdk_scheduler_dpdk_governor.a 00:02:41.312 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:41.312 LIB libspdk_blob_bdev.a 00:02:41.312 SO libspdk_blob_bdev.so.11.0 00:02:41.312 LIB libspdk_accel_iaa.a 00:02:41.312 LIB libspdk_scheduler_dynamic.a 00:02:41.312 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:41.312 LIB libspdk_keyring_file.a 00:02:41.312 LIB libspdk_accel_ioat.a 00:02:41.312 LIB libspdk_accel_dsa.a 00:02:41.312 LIB libspdk_accel_error.a 00:02:41.312 SO libspdk_accel_iaa.so.3.0 00:02:41.312 SO libspdk_scheduler_dynamic.so.4.0 00:02:41.312 SO libspdk_keyring_file.so.1.0 00:02:41.312 SO libspdk_accel_error.so.2.0 00:02:41.312 SO libspdk_accel_ioat.so.6.0 00:02:41.312 SYMLINK libspdk_blob_bdev.so 00:02:41.312 SO libspdk_accel_dsa.so.5.0 00:02:41.571 SYMLINK libspdk_accel_iaa.so 00:02:41.571 SYMLINK libspdk_scheduler_dynamic.so 00:02:41.571 SYMLINK libspdk_accel_ioat.so 00:02:41.571 SYMLINK libspdk_keyring_file.so 00:02:41.571 SYMLINK libspdk_accel_error.so 00:02:41.571 SYMLINK libspdk_accel_dsa.so 00:02:41.571 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.571 CC module/keyring/linux/keyring.o 00:02:41.571 CC module/keyring/linux/keyring_rpc.o 00:02:41.829 LIB libspdk_scheduler_gscheduler.a 00:02:41.829 LIB libspdk_keyring_linux.a 00:02:41.829 SO libspdk_scheduler_gscheduler.so.4.0 00:02:41.829 SO libspdk_keyring_linux.so.1.0 00:02:41.829 CC module/blobfs/bdev/blobfs_bdev.o 00:02:41.829 CC module/bdev/gpt/gpt.o 00:02:41.829 CC module/bdev/malloc/bdev_malloc.o 00:02:41.829 CC module/bdev/error/vbdev_error.o 00:02:41.829 CC module/bdev/lvol/vbdev_lvol.o 00:02:41.829 CC module/bdev/delay/vbdev_delay.o 00:02:41.829 SYMLINK libspdk_scheduler_gscheduler.so 00:02:41.829 SYMLINK libspdk_keyring_linux.so 00:02:41.829 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.829 CC module/bdev/malloc/bdev_malloc_rpc.o 
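The module/ objects in this stretch (accel, blob/bdev, sock, keyring, scheduler and the bdev back-ends such as gpt, malloc, error, lvol and delay) are SPDK's pluggable components; they are archived and shared-linked the same way as the core lib/ targets. Assuming the default output layout, the finished libraries can be inspected once make completes, for example:

  # Assumed layout: built libraries are collected under build/lib in the repo.
  ls /home/vagrant/spdk_repo/spdk/build/lib/libspdk_bdev_*.so* \
     /home/vagrant/spdk_repo/spdk/build/lib/libspdk_accel_*.so*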
00:02:41.829 CC module/bdev/null/bdev_null.o 00:02:42.087 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.087 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.087 CC module/bdev/null/bdev_null_rpc.o 00:02:42.087 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.087 LIB libspdk_sock_posix.a 00:02:42.087 SO libspdk_sock_posix.so.6.0 00:02:42.087 LIB libspdk_blobfs_bdev.a 00:02:42.087 SO libspdk_blobfs_bdev.so.6.0 00:02:42.346 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.346 SYMLINK libspdk_sock_posix.so 00:02:42.346 LIB libspdk_bdev_error.a 00:02:42.346 LIB libspdk_bdev_malloc.a 00:02:42.346 LIB libspdk_bdev_null.a 00:02:42.346 LIB libspdk_bdev_delay.a 00:02:42.346 SO libspdk_bdev_error.so.6.0 00:02:42.346 SYMLINK libspdk_blobfs_bdev.so 00:02:42.346 SO libspdk_bdev_malloc.so.6.0 00:02:42.346 SO libspdk_bdev_null.so.6.0 00:02:42.346 SO libspdk_bdev_delay.so.6.0 00:02:42.346 CC module/bdev/nvme/bdev_nvme.o 00:02:42.346 LIB libspdk_bdev_gpt.a 00:02:42.346 SYMLINK libspdk_bdev_error.so 00:02:42.346 SYMLINK libspdk_bdev_malloc.so 00:02:42.346 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.346 SO libspdk_bdev_gpt.so.6.0 00:02:42.346 SYMLINK libspdk_bdev_null.so 00:02:42.346 SYMLINK libspdk_bdev_delay.so 00:02:42.346 CC module/bdev/nvme/nvme_rpc.o 00:02:42.346 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.346 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.346 SYMLINK libspdk_bdev_gpt.so 00:02:42.604 CC module/bdev/raid/bdev_raid.o 00:02:42.604 CC module/bdev/split/vbdev_split.o 00:02:42.604 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.604 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.604 LIB libspdk_bdev_lvol.a 00:02:42.604 CC module/bdev/aio/bdev_aio.o 00:02:42.604 SO libspdk_bdev_lvol.so.6.0 00:02:42.604 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.862 CC module/bdev/nvme/vbdev_opal.o 00:02:42.862 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.862 SYMLINK libspdk_bdev_lvol.so 00:02:42.862 LIB libspdk_bdev_split.a 00:02:42.862 SO libspdk_bdev_split.so.6.0 00:02:42.862 LIB libspdk_bdev_passthru.a 00:02:42.862 SO libspdk_bdev_passthru.so.6.0 00:02:42.862 CC module/bdev/ftl/bdev_ftl.o 00:02:42.862 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:43.121 SYMLINK libspdk_bdev_split.so 00:02:43.121 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:43.121 SYMLINK libspdk_bdev_passthru.so 00:02:43.121 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:43.121 LIB libspdk_bdev_aio.a 00:02:43.121 SO libspdk_bdev_aio.so.6.0 00:02:43.121 SYMLINK libspdk_bdev_aio.so 00:02:43.121 CC module/bdev/raid/bdev_raid_rpc.o 00:02:43.121 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:43.121 LIB libspdk_bdev_zone_block.a 00:02:43.121 CC module/bdev/raid/bdev_raid_sb.o 00:02:43.121 CC module/bdev/raid/raid0.o 00:02:43.380 CC module/bdev/iscsi/bdev_iscsi.o 00:02:43.380 SO libspdk_bdev_zone_block.so.6.0 00:02:43.380 LIB libspdk_bdev_ftl.a 00:02:43.380 SO libspdk_bdev_ftl.so.6.0 00:02:43.380 SYMLINK libspdk_bdev_zone_block.so 00:02:43.380 CC module/bdev/raid/raid1.o 00:02:43.380 CC module/bdev/raid/concat.o 00:02:43.380 SYMLINK libspdk_bdev_ftl.so 00:02:43.380 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:43.380 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:43.380 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:43.638 CC module/bdev/raid/raid5f.o 00:02:43.638 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:43.896 LIB libspdk_bdev_iscsi.a 00:02:43.896 SO libspdk_bdev_iscsi.so.6.0 00:02:43.896 SYMLINK libspdk_bdev_iscsi.so 00:02:44.155 LIB libspdk_bdev_virtio.a 00:02:44.155 SO libspdk_bdev_virtio.so.6.0 00:02:44.155 LIB 
libspdk_bdev_raid.a 00:02:44.155 SYMLINK libspdk_bdev_virtio.so 00:02:44.414 SO libspdk_bdev_raid.so.6.0 00:02:44.414 SYMLINK libspdk_bdev_raid.so 00:02:45.350 LIB libspdk_bdev_nvme.a 00:02:45.351 SO libspdk_bdev_nvme.so.7.0 00:02:45.609 SYMLINK libspdk_bdev_nvme.so 00:02:46.176 CC module/event/subsystems/vmd/vmd.o 00:02:46.176 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:46.176 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:46.176 CC module/event/subsystems/sock/sock.o 00:02:46.176 CC module/event/subsystems/iobuf/iobuf.o 00:02:46.176 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:46.176 CC module/event/subsystems/keyring/keyring.o 00:02:46.176 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.176 LIB libspdk_event_vhost_blk.a 00:02:46.176 LIB libspdk_event_vmd.a 00:02:46.176 LIB libspdk_event_keyring.a 00:02:46.176 SO libspdk_event_vhost_blk.so.3.0 00:02:46.176 SO libspdk_event_vmd.so.6.0 00:02:46.176 LIB libspdk_event_sock.a 00:02:46.176 LIB libspdk_event_iobuf.a 00:02:46.176 LIB libspdk_event_scheduler.a 00:02:46.176 SO libspdk_event_keyring.so.1.0 00:02:46.176 SO libspdk_event_sock.so.5.0 00:02:46.176 SO libspdk_event_iobuf.so.3.0 00:02:46.176 SYMLINK libspdk_event_vhost_blk.so 00:02:46.435 SO libspdk_event_scheduler.so.4.0 00:02:46.435 SYMLINK libspdk_event_vmd.so 00:02:46.435 SYMLINK libspdk_event_keyring.so 00:02:46.435 SYMLINK libspdk_event_sock.so 00:02:46.435 SYMLINK libspdk_event_iobuf.so 00:02:46.435 SYMLINK libspdk_event_scheduler.so 00:02:46.757 CC module/event/subsystems/accel/accel.o 00:02:46.757 LIB libspdk_event_accel.a 00:02:46.757 SO libspdk_event_accel.so.6.0 00:02:47.016 SYMLINK libspdk_event_accel.so 00:02:47.274 CC module/event/subsystems/bdev/bdev.o 00:02:47.274 LIB libspdk_event_bdev.a 00:02:47.532 SO libspdk_event_bdev.so.6.0 00:02:47.532 SYMLINK libspdk_event_bdev.so 00:02:47.790 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:47.790 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:47.791 CC module/event/subsystems/nbd/nbd.o 00:02:47.791 CC module/event/subsystems/scsi/scsi.o 00:02:47.791 CC module/event/subsystems/ublk/ublk.o 00:02:48.062 LIB libspdk_event_nbd.a 00:02:48.062 LIB libspdk_event_ublk.a 00:02:48.062 SO libspdk_event_nbd.so.6.0 00:02:48.062 LIB libspdk_event_scsi.a 00:02:48.062 SO libspdk_event_ublk.so.3.0 00:02:48.062 SO libspdk_event_scsi.so.6.0 00:02:48.062 SYMLINK libspdk_event_nbd.so 00:02:48.062 SYMLINK libspdk_event_ublk.so 00:02:48.062 LIB libspdk_event_nvmf.a 00:02:48.062 SYMLINK libspdk_event_scsi.so 00:02:48.062 SO libspdk_event_nvmf.so.6.0 00:02:48.354 SYMLINK libspdk_event_nvmf.so 00:02:48.354 CC module/event/subsystems/iscsi/iscsi.o 00:02:48.354 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:48.354 LIB libspdk_event_vhost_scsi.a 00:02:48.613 LIB libspdk_event_iscsi.a 00:02:48.613 SO libspdk_event_vhost_scsi.so.3.0 00:02:48.613 SO libspdk_event_iscsi.so.6.0 00:02:48.613 SYMLINK libspdk_event_vhost_scsi.so 00:02:48.613 SYMLINK libspdk_event_iscsi.so 00:02:48.871 SO libspdk.so.6.0 00:02:48.871 SYMLINK libspdk.so 00:02:48.871 CC app/spdk_lspci/spdk_lspci.o 00:02:49.129 CXX app/trace/trace.o 00:02:49.129 CC app/trace_record/trace_record.o 00:02:49.129 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:49.129 CC app/nvmf_tgt/nvmf_main.o 00:02:49.129 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.129 CC examples/util/zipf/zipf.o 00:02:49.129 CC test/thread/poller_perf/poller_perf.o 00:02:49.129 CC examples/ioat/perf/perf.o 00:02:49.129 CC app/spdk_tgt/spdk_tgt.o 00:02:49.129 LINK spdk_lspci 00:02:49.387 LINK nvmf_tgt 
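Alongside the libraries, the app/, examples/ and test/ sources in this part of the log are compiled and then linked into standalone executables (the LINK lines, e.g. spdk_lspci and nvmf_tgt just above). Assuming the default layout, the application binaries land under build/bin, so a quick post-build check might be:

  # Assumed path; the SO/SYMLINK lines suggest a shared-library build, in which
  # case ldd should list the libspdk_*.so objects produced earlier in the log.
  ldd /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt | grep libspdk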
00:02:49.387 LINK iscsi_tgt 00:02:49.387 LINK poller_perf 00:02:49.387 LINK zipf 00:02:49.387 LINK interrupt_tgt 00:02:49.387 LINK spdk_trace_record 00:02:49.387 LINK spdk_tgt 00:02:49.645 CC app/spdk_nvme_perf/perf.o 00:02:49.645 LINK ioat_perf 00:02:49.645 LINK spdk_trace 00:02:49.645 CC app/spdk_nvme_identify/identify.o 00:02:49.645 TEST_HEADER include/spdk/accel.h 00:02:49.645 TEST_HEADER include/spdk/accel_module.h 00:02:49.645 TEST_HEADER include/spdk/assert.h 00:02:49.645 TEST_HEADER include/spdk/barrier.h 00:02:49.645 TEST_HEADER include/spdk/base64.h 00:02:49.645 TEST_HEADER include/spdk/bdev.h 00:02:49.645 TEST_HEADER include/spdk/bdev_module.h 00:02:49.645 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.645 TEST_HEADER include/spdk/bit_array.h 00:02:49.645 TEST_HEADER include/spdk/bit_pool.h 00:02:49.645 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.645 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.645 TEST_HEADER include/spdk/blobfs.h 00:02:49.645 TEST_HEADER include/spdk/blob.h 00:02:49.645 TEST_HEADER include/spdk/conf.h 00:02:49.645 TEST_HEADER include/spdk/config.h 00:02:49.645 TEST_HEADER include/spdk/cpuset.h 00:02:49.645 CC app/spdk_nvme_discover/discovery_aer.o 00:02:49.645 TEST_HEADER include/spdk/crc16.h 00:02:49.645 TEST_HEADER include/spdk/crc32.h 00:02:49.645 TEST_HEADER include/spdk/crc64.h 00:02:49.645 TEST_HEADER include/spdk/dif.h 00:02:49.645 TEST_HEADER include/spdk/dma.h 00:02:49.645 CC test/dma/test_dma/test_dma.o 00:02:49.903 TEST_HEADER include/spdk/endian.h 00:02:49.903 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.903 TEST_HEADER include/spdk/env.h 00:02:49.903 TEST_HEADER include/spdk/event.h 00:02:49.903 TEST_HEADER include/spdk/fd_group.h 00:02:49.903 TEST_HEADER include/spdk/fd.h 00:02:49.903 TEST_HEADER include/spdk/file.h 00:02:49.903 TEST_HEADER include/spdk/ftl.h 00:02:49.903 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.903 TEST_HEADER include/spdk/hexlify.h 00:02:49.903 TEST_HEADER include/spdk/histogram_data.h 00:02:49.903 TEST_HEADER include/spdk/idxd.h 00:02:49.903 CC examples/ioat/verify/verify.o 00:02:49.903 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.903 TEST_HEADER include/spdk/init.h 00:02:49.903 TEST_HEADER include/spdk/ioat.h 00:02:49.903 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.903 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.903 TEST_HEADER include/spdk/json.h 00:02:49.903 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.903 TEST_HEADER include/spdk/keyring.h 00:02:49.903 TEST_HEADER include/spdk/keyring_module.h 00:02:49.903 TEST_HEADER include/spdk/likely.h 00:02:49.903 TEST_HEADER include/spdk/log.h 00:02:49.903 TEST_HEADER include/spdk/lvol.h 00:02:49.903 TEST_HEADER include/spdk/memory.h 00:02:49.903 CC test/app/bdev_svc/bdev_svc.o 00:02:49.903 TEST_HEADER include/spdk/mmio.h 00:02:49.903 TEST_HEADER include/spdk/nbd.h 00:02:49.903 TEST_HEADER include/spdk/net.h 00:02:49.903 TEST_HEADER include/spdk/notify.h 00:02:49.903 TEST_HEADER include/spdk/nvme.h 00:02:49.903 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.903 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.903 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.903 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.903 CC test/env/vtophys/vtophys.o 00:02:49.903 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.903 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.903 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.903 TEST_HEADER include/spdk/nvmf.h 00:02:49.903 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.903 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.903 TEST_HEADER 
include/spdk/opal.h 00:02:49.903 TEST_HEADER include/spdk/opal_spec.h 00:02:49.903 TEST_HEADER include/spdk/pci_ids.h 00:02:49.903 TEST_HEADER include/spdk/pipe.h 00:02:49.903 TEST_HEADER include/spdk/queue.h 00:02:49.903 TEST_HEADER include/spdk/reduce.h 00:02:49.903 TEST_HEADER include/spdk/rpc.h 00:02:49.903 TEST_HEADER include/spdk/scheduler.h 00:02:49.903 TEST_HEADER include/spdk/scsi.h 00:02:49.903 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.903 TEST_HEADER include/spdk/sock.h 00:02:49.903 TEST_HEADER include/spdk/stdinc.h 00:02:49.903 TEST_HEADER include/spdk/string.h 00:02:49.903 TEST_HEADER include/spdk/thread.h 00:02:49.903 TEST_HEADER include/spdk/trace.h 00:02:49.903 TEST_HEADER include/spdk/trace_parser.h 00:02:49.903 TEST_HEADER include/spdk/tree.h 00:02:49.903 TEST_HEADER include/spdk/ublk.h 00:02:49.903 TEST_HEADER include/spdk/util.h 00:02:49.903 TEST_HEADER include/spdk/uuid.h 00:02:49.903 TEST_HEADER include/spdk/version.h 00:02:49.903 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.903 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.903 CC test/env/mem_callbacks/mem_callbacks.o 00:02:49.903 TEST_HEADER include/spdk/vhost.h 00:02:49.903 TEST_HEADER include/spdk/vmd.h 00:02:49.903 TEST_HEADER include/spdk/xor.h 00:02:49.903 TEST_HEADER include/spdk/zipf.h 00:02:49.903 CXX test/cpp_headers/accel.o 00:02:49.903 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.162 LINK spdk_nvme_discover 00:02:50.162 LINK bdev_svc 00:02:50.162 LINK verify 00:02:50.162 LINK vtophys 00:02:50.162 CXX test/cpp_headers/accel_module.o 00:02:50.162 LINK env_dpdk_post_init 00:02:50.421 CXX test/cpp_headers/assert.o 00:02:50.421 CC examples/sock/hello_world/hello_sock.o 00:02:50.421 LINK test_dma 00:02:50.421 CC examples/thread/thread/thread_ex.o 00:02:50.421 CC test/env/memory/memory_ut.o 00:02:50.421 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.421 CC test/env/pci/pci_ut.o 00:02:50.421 CXX test/cpp_headers/barrier.o 00:02:50.679 LINK spdk_nvme_perf 00:02:50.679 CXX test/cpp_headers/base64.o 00:02:50.679 LINK thread 00:02:50.679 LINK hello_sock 00:02:50.937 CXX test/cpp_headers/bdev.o 00:02:50.937 CC test/event/event_perf/event_perf.o 00:02:50.937 CC test/event/reactor/reactor.o 00:02:50.937 LINK mem_callbacks 00:02:50.937 LINK spdk_nvme_identify 00:02:51.196 CC app/spdk_top/spdk_top.o 00:02:51.196 LINK event_perf 00:02:51.196 CXX test/cpp_headers/bdev_module.o 00:02:51.196 LINK reactor 00:02:51.196 CC examples/vmd/lsvmd/lsvmd.o 00:02:51.196 CC app/vhost/vhost.o 00:02:51.196 LINK lsvmd 00:02:51.455 CC test/event/reactor_perf/reactor_perf.o 00:02:51.455 LINK pci_ut 00:02:51.455 CXX test/cpp_headers/bdev_zone.o 00:02:51.455 CC test/event/app_repeat/app_repeat.o 00:02:51.455 LINK nvme_fuzz 00:02:51.455 CC test/event/scheduler/scheduler.o 00:02:51.455 LINK reactor_perf 00:02:51.455 LINK vhost 00:02:51.713 CXX test/cpp_headers/bit_array.o 00:02:51.713 LINK app_repeat 00:02:51.713 CC examples/vmd/led/led.o 00:02:51.713 LINK memory_ut 00:02:51.972 CXX test/cpp_headers/bit_pool.o 00:02:51.972 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.972 LINK scheduler 00:02:51.972 CC app/spdk_dd/spdk_dd.o 00:02:51.972 LINK led 00:02:51.972 CC test/app/histogram_perf/histogram_perf.o 00:02:51.972 CXX test/cpp_headers/blob_bdev.o 00:02:51.972 CC app/fio/nvme/fio_plugin.o 00:02:52.260 CC test/nvme/aer/aer.o 00:02:52.260 CC test/rpc_client/rpc_client_test.o 00:02:52.260 LINK histogram_perf 00:02:52.517 CC examples/idxd/perf/perf.o 00:02:52.517 LINK rpc_client_test 00:02:52.517 CXX 
test/cpp_headers/blobfs_bdev.o 00:02:52.517 LINK spdk_top 00:02:52.517 CC examples/accel/perf/accel_perf.o 00:02:52.517 LINK spdk_dd 00:02:52.517 LINK aer 00:02:52.517 CC test/app/jsoncat/jsoncat.o 00:02:52.775 CC test/app/stub/stub.o 00:02:52.775 CXX test/cpp_headers/blobfs.o 00:02:52.775 LINK jsoncat 00:02:52.775 LINK spdk_nvme 00:02:52.775 CXX test/cpp_headers/blob.o 00:02:52.775 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:53.033 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:53.033 LINK idxd_perf 00:02:53.033 CXX test/cpp_headers/conf.o 00:02:53.033 CC test/nvme/reset/reset.o 00:02:53.033 CXX test/cpp_headers/config.o 00:02:53.033 CXX test/cpp_headers/cpuset.o 00:02:53.034 LINK stub 00:02:53.034 CC app/fio/bdev/fio_plugin.o 00:02:53.034 CXX test/cpp_headers/crc16.o 00:02:53.291 CC test/accel/dif/dif.o 00:02:53.291 CXX test/cpp_headers/crc32.o 00:02:53.291 CC test/nvme/sgl/sgl.o 00:02:53.291 CC test/blobfs/mkfs/mkfs.o 00:02:53.550 LINK vhost_fuzz 00:02:53.550 LINK accel_perf 00:02:53.550 CC test/nvme/e2edp/nvme_dp.o 00:02:53.550 CXX test/cpp_headers/crc64.o 00:02:53.550 LINK reset 00:02:53.550 LINK sgl 00:02:53.550 LINK mkfs 00:02:53.550 CXX test/cpp_headers/dif.o 00:02:53.808 LINK spdk_bdev 00:02:53.808 LINK dif 00:02:53.808 LINK nvme_dp 00:02:53.808 CXX test/cpp_headers/dma.o 00:02:53.808 CXX test/cpp_headers/endian.o 00:02:53.808 CC test/nvme/err_injection/err_injection.o 00:02:53.808 CC test/nvme/overhead/overhead.o 00:02:53.808 CC examples/nvme/hello_world/hello_world.o 00:02:54.066 CC examples/blob/hello_world/hello_blob.o 00:02:54.066 CC examples/nvme/reconnect/reconnect.o 00:02:54.066 CXX test/cpp_headers/env_dpdk.o 00:02:54.066 CC test/nvme/startup/startup.o 00:02:54.066 LINK iscsi_fuzz 00:02:54.066 LINK err_injection 00:02:54.066 LINK hello_world 00:02:54.324 CC examples/blob/cli/blobcli.o 00:02:54.324 LINK hello_blob 00:02:54.324 LINK overhead 00:02:54.324 LINK startup 00:02:54.324 CXX test/cpp_headers/env.o 00:02:54.324 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.324 LINK reconnect 00:02:54.583 CC test/nvme/reserve/reserve.o 00:02:54.583 CXX test/cpp_headers/event.o 00:02:54.583 CC test/nvme/simple_copy/simple_copy.o 00:02:54.583 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.583 CC test/nvme/connect_stress/connect_stress.o 00:02:54.841 CC test/nvme/boot_partition/boot_partition.o 00:02:54.841 LINK hello_bdev 00:02:54.841 CC test/lvol/esnap/esnap.o 00:02:54.841 CXX test/cpp_headers/fd_group.o 00:02:54.841 LINK reserve 00:02:54.841 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:54.841 LINK simple_copy 00:02:54.841 LINK blobcli 00:02:54.841 CXX test/cpp_headers/fd.o 00:02:54.841 CXX test/cpp_headers/file.o 00:02:55.100 CXX test/cpp_headers/ftl.o 00:02:55.100 LINK connect_stress 00:02:55.100 LINK boot_partition 00:02:55.100 CC test/nvme/compliance/nvme_compliance.o 00:02:55.100 CXX test/cpp_headers/gpt_spec.o 00:02:55.100 CXX test/cpp_headers/hexlify.o 00:02:55.359 CC test/nvme/fused_ordering/fused_ordering.o 00:02:55.359 CXX test/cpp_headers/histogram_data.o 00:02:55.359 CXX test/cpp_headers/idxd.o 00:02:55.359 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:55.359 LINK nvme_manage 00:02:55.619 CC examples/nvme/arbitration/arbitration.o 00:02:55.619 CC test/bdev/bdevio/bdevio.o 00:02:55.619 LINK fused_ordering 00:02:55.619 CXX test/cpp_headers/idxd_spec.o 00:02:55.619 LINK nvme_compliance 00:02:55.619 LINK bdevperf 00:02:55.619 CC test/nvme/fdp/fdp.o 00:02:55.619 CXX test/cpp_headers/init.o 00:02:55.619 LINK doorbell_aers 00:02:55.877 CC test/nvme/cuse/cuse.o 
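The TEST_HEADER include/spdk/*.h entries paired with the CXX test/cpp_headers/*.o compiles above are the public-header check: each installed SPDK header appears to get its own translation unit so it must compile standalone. A simplified, hypothetical equivalent of that check:

  # Hypothetical stand-in for the cpp_headers test: compile each public header by itself.
  cd /home/vagrant/spdk_repo/spdk
  for h in include/spdk/*.h; do
      echo "#include <spdk/$(basename "$h")>" | g++ -I include -x c++ -c - -o /dev/null \
          || echo "FAIL: $h"
  done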
00:02:55.878 CXX test/cpp_headers/ioat.o 00:02:55.878 CXX test/cpp_headers/ioat_spec.o 00:02:55.878 LINK arbitration 00:02:55.878 CC examples/nvme/hotplug/hotplug.o 00:02:55.878 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:55.878 CC examples/nvme/abort/abort.o 00:02:55.878 LINK bdevio 00:02:56.136 CXX test/cpp_headers/iscsi_spec.o 00:02:56.136 LINK fdp 00:02:56.136 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:56.136 LINK cmb_copy 00:02:56.136 CXX test/cpp_headers/json.o 00:02:56.136 CXX test/cpp_headers/jsonrpc.o 00:02:56.136 LINK hotplug 00:02:56.136 CXX test/cpp_headers/keyring.o 00:02:56.136 CXX test/cpp_headers/keyring_module.o 00:02:56.395 LINK pmr_persistence 00:02:56.395 CXX test/cpp_headers/likely.o 00:02:56.395 CXX test/cpp_headers/log.o 00:02:56.395 CXX test/cpp_headers/lvol.o 00:02:56.395 CXX test/cpp_headers/memory.o 00:02:56.395 CXX test/cpp_headers/mmio.o 00:02:56.395 CXX test/cpp_headers/nbd.o 00:02:56.395 CXX test/cpp_headers/net.o 00:02:56.654 CXX test/cpp_headers/notify.o 00:02:56.654 CXX test/cpp_headers/nvme.o 00:02:56.654 CXX test/cpp_headers/nvme_intel.o 00:02:56.654 LINK abort 00:02:56.654 CXX test/cpp_headers/nvme_ocssd.o 00:02:56.654 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.654 CXX test/cpp_headers/nvme_spec.o 00:02:56.654 CXX test/cpp_headers/nvme_zns.o 00:02:56.654 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.912 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.912 CXX test/cpp_headers/nvmf.o 00:02:56.912 CXX test/cpp_headers/nvmf_spec.o 00:02:56.912 CXX test/cpp_headers/nvmf_transport.o 00:02:56.912 CXX test/cpp_headers/opal.o 00:02:56.912 CXX test/cpp_headers/opal_spec.o 00:02:56.912 CC examples/nvmf/nvmf/nvmf.o 00:02:57.172 CXX test/cpp_headers/pci_ids.o 00:02:57.172 CXX test/cpp_headers/pipe.o 00:02:57.172 CXX test/cpp_headers/queue.o 00:02:57.172 CXX test/cpp_headers/rpc.o 00:02:57.172 CXX test/cpp_headers/reduce.o 00:02:57.172 CXX test/cpp_headers/scheduler.o 00:02:57.172 CXX test/cpp_headers/scsi.o 00:02:57.172 CXX test/cpp_headers/scsi_spec.o 00:02:57.172 CXX test/cpp_headers/sock.o 00:02:57.431 CXX test/cpp_headers/stdinc.o 00:02:57.431 CXX test/cpp_headers/string.o 00:02:57.431 CXX test/cpp_headers/thread.o 00:02:57.431 CXX test/cpp_headers/trace.o 00:02:57.431 CXX test/cpp_headers/trace_parser.o 00:02:57.431 CXX test/cpp_headers/tree.o 00:02:57.431 CXX test/cpp_headers/ublk.o 00:02:57.431 CXX test/cpp_headers/util.o 00:02:57.431 LINK cuse 00:02:57.431 CXX test/cpp_headers/uuid.o 00:02:57.431 CXX test/cpp_headers/version.o 00:02:57.431 LINK nvmf 00:02:57.693 CXX test/cpp_headers/vfio_user_pci.o 00:02:57.693 CXX test/cpp_headers/vfio_user_spec.o 00:02:57.693 CXX test/cpp_headers/vhost.o 00:02:57.693 CXX test/cpp_headers/vmd.o 00:02:57.693 CXX test/cpp_headers/xor.o 00:02:57.693 CXX test/cpp_headers/zipf.o 00:03:03.014 LINK esnap 00:03:03.014 00:03:03.014 real 1m19.798s 00:03:03.014 user 7m26.759s 00:03:03.014 sys 1m42.902s 00:03:03.014 11:14:18 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:03.014 11:14:18 make -- common/autotest_common.sh@10 -- $ set +x 00:03:03.014 ************************************ 00:03:03.014 END TEST make 00:03:03.014 ************************************ 00:03:03.014 11:14:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:03.014 11:14:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:03.014 11:14:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:03.014 11:14:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.014 11:14:18 -- pm/common@43 -- $ [[ 
-e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:03.014 11:14:18 -- pm/common@44 -- $ pid=5143 00:03:03.014 11:14:18 -- pm/common@50 -- $ kill -TERM 5143 00:03:03.014 11:14:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.014 11:14:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:03.014 11:14:18 -- pm/common@44 -- $ pid=5145 00:03:03.014 11:14:18 -- pm/common@50 -- $ kill -TERM 5145 00:03:03.014 11:14:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:03.014 11:14:18 -- nvmf/common.sh@7 -- # uname -s 00:03:03.014 11:14:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:03.014 11:14:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:03.014 11:14:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:03.014 11:14:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:03.014 11:14:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:03.014 11:14:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:03.014 11:14:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:03.014 11:14:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:03.014 11:14:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:03.014 11:14:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:03.014 11:14:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:03:03.014 11:14:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:03:03.015 11:14:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:03.015 11:14:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:03.015 11:14:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:03.015 11:14:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:03.015 11:14:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:03.015 11:14:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:03.015 11:14:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.015 11:14:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.015 11:14:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.015 11:14:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.015 11:14:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.015 11:14:18 -- paths/export.sh@5 -- # export PATH 00:03:03.015 11:14:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.015 11:14:18 -- nvmf/common.sh@47 -- # : 0 00:03:03.015 11:14:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:03.015 11:14:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:03.015 11:14:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:03.015 11:14:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:03.015 11:14:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:03.015 11:14:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:03.015 11:14:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:03.015 11:14:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:03.015 11:14:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:03.015 11:14:18 -- spdk/autotest.sh@32 -- # uname -s 00:03:03.015 11:14:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:03.015 11:14:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:03.015 11:14:18 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.015 11:14:18 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:03.015 11:14:18 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:03.015 11:14:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:03.015 11:14:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:03.015 11:14:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:03.015 11:14:18 -- spdk/autotest.sh@48 -- # udevadm_pid=52819 00:03:03.015 11:14:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:03.015 11:14:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:03.015 11:14:18 -- pm/common@17 -- # local monitor 00:03:03.015 11:14:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.273 11:14:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:03.274 11:14:18 -- pm/common@25 -- # sleep 1 00:03:03.274 11:14:18 -- pm/common@21 -- # date +%s 00:03:03.274 11:14:18 -- pm/common@21 -- # date +%s 00:03:03.274 11:14:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721906058 00:03:03.274 11:14:18 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721906058 00:03:03.274 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721906058_collect-vmstat.pm.log 00:03:03.274 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721906058_collect-cpu-load.pm.log 00:03:04.209 11:14:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:04.209 11:14:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:04.209 11:14:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.209 11:14:19 -- common/autotest_common.sh@10 -- # set +x 00:03:04.209 11:14:19 -- spdk/autotest.sh@59 -- # create_test_list 00:03:04.209 11:14:19 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:04.209 11:14:19 -- common/autotest_common.sh@10 -- # set +x 00:03:04.209 11:14:19 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:04.209 11:14:19 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:04.209 11:14:19 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:04.209 11:14:19 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:04.209 11:14:19 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:04.209 11:14:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:04.209 11:14:19 -- common/autotest_common.sh@1455 -- # uname 00:03:04.209 11:14:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:04.209 11:14:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:04.209 11:14:19 -- common/autotest_common.sh@1475 -- # uname 00:03:04.209 11:14:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:04.209 11:14:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:04.209 11:14:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:04.209 11:14:19 -- spdk/autotest.sh@72 -- # hash lcov 00:03:04.209 11:14:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:04.209 11:14:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:04.209 --rc lcov_branch_coverage=1 00:03:04.209 --rc lcov_function_coverage=1 00:03:04.209 --rc genhtml_branch_coverage=1 00:03:04.209 --rc genhtml_function_coverage=1 00:03:04.209 --rc genhtml_legend=1 00:03:04.209 --rc geninfo_all_blocks=1 00:03:04.209 ' 00:03:04.209 11:14:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:04.209 --rc lcov_branch_coverage=1 00:03:04.209 --rc lcov_function_coverage=1 00:03:04.209 --rc genhtml_branch_coverage=1 00:03:04.209 --rc genhtml_function_coverage=1 00:03:04.209 --rc genhtml_legend=1 00:03:04.209 --rc geninfo_all_blocks=1 00:03:04.209 ' 00:03:04.209 11:14:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:04.209 --rc lcov_branch_coverage=1 00:03:04.209 --rc lcov_function_coverage=1 00:03:04.209 --rc genhtml_branch_coverage=1 00:03:04.209 --rc genhtml_function_coverage=1 00:03:04.209 --rc genhtml_legend=1 00:03:04.209 --rc geninfo_all_blocks=1 00:03:04.209 --no-external' 00:03:04.209 11:14:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:04.209 --rc lcov_branch_coverage=1 00:03:04.209 --rc lcov_function_coverage=1 00:03:04.209 --rc genhtml_branch_coverage=1 00:03:04.209 --rc genhtml_function_coverage=1 00:03:04.209 --rc genhtml_legend=1 00:03:04.209 --rc geninfo_all_blocks=1 00:03:04.209 --no-external' 00:03:04.209 11:14:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:04.209 lcov: LCOV version 1.14 00:03:04.209 11:14:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:22.289 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:22.289 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
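The lcov call traced above seeds the baseline coverage file before any test runs (lcov 1.14, initial capture with -c -i), and the geninfo warnings that follow, repeated below once per header object, are expected: the test/cpp_headers objects are compile-only checks, so their .gcno files contain no functions. A minimal reproduction of that capture with the same options; the "Tests" label and the merge step are illustrative additions, not taken from this log:

    # Same --rc switches the job exports as LCOV_OPTS (kept unquoted on use so the flags split).
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
    src=/home/vagrant/spdk_repo/spdk
    out=$src/../output
    # Initial (-i) capture: records zero execution counts for every instrumented file.
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # After the tests have run, capture again and merge with the baseline.
    lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$src" -o "$out/cov_test.info"
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"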
00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:34.492 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:34.492 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:34.492 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:34.492 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:34.493 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:34.493 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:34.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:36.393 11:14:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:36.393 11:14:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:36.393 11:14:52 -- common/autotest_common.sh@10 -- # set +x 00:03:36.393 11:14:52 -- spdk/autotest.sh@91 -- # rm -f 00:03:36.393 11:14:52 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.218 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:37.218 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:37.218 11:14:52 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:37.218 11:14:52 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:37.218 11:14:52 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:37.218 11:14:52 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:37.218 11:14:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.218 11:14:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:37.218 11:14:52 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:37.218 11:14:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.218 11:14:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:37.218 11:14:52 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:37.218 11:14:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.218 11:14:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:37.218 11:14:52 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:37.218 11:14:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.218 11:14:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
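The setup.sh reset step that starts here walks /sys/block/nvme* and reads each device's queue/zoned attribute; every namespace on this VM reports "none", so none is excluded, and the same check continues below for nvme1n3 and again in the acl suite. A condensed sketch of that filter, an approximation of the is_block_zoned/get_zoned_devs helpers rather than the verbatim SPDK source:

    # Succeeds only for devices whose zoned attribute is something other than
    # "none" (host-aware or host-managed ZNS namespaces).
    is_block_zoned() {
        local device=$1
        # A missing zoned attribute counts as not zoned.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    for nvme in /sys/block/nvme*; do
        is_block_zoned "$(basename "$nvme")" && echo "zoned device: $nvme"
    done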
00:03:37.218 11:14:52 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:37.218 11:14:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:37.218 11:14:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.218 11:14:52 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:37.218 11:14:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.218 11:14:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.218 11:14:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:37.218 11:14:52 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:37.218 11:14:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:37.218 No valid GPT data, bailing 00:03:37.218 11:14:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.218 11:14:52 -- scripts/common.sh@391 -- # pt= 00:03:37.218 11:14:52 -- scripts/common.sh@392 -- # return 1 00:03:37.218 11:14:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:37.218 1+0 records in 00:03:37.218 1+0 records out 00:03:37.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413684 s, 253 MB/s 00:03:37.218 11:14:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.218 11:14:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.218 11:14:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:37.218 11:14:52 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:37.218 11:14:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:37.218 No valid GPT data, bailing 00:03:37.218 11:14:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:37.218 11:14:53 -- scripts/common.sh@391 -- # pt= 00:03:37.218 11:14:53 -- scripts/common.sh@392 -- # return 1 00:03:37.218 11:14:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:37.218 1+0 records in 00:03:37.218 1+0 records out 00:03:37.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00352549 s, 297 MB/s 00:03:37.218 11:14:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.218 11:14:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.218 11:14:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:37.218 11:14:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:37.218 11:14:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:37.477 No valid GPT data, bailing 00:03:37.477 11:14:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:37.477 11:14:53 -- scripts/common.sh@391 -- # pt= 00:03:37.477 11:14:53 -- scripts/common.sh@392 -- # return 1 00:03:37.477 11:14:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:37.477 1+0 records in 00:03:37.477 1+0 records out 00:03:37.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385836 s, 272 MB/s 00:03:37.477 11:14:53 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.477 11:14:53 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.477 11:14:53 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:37.477 11:14:53 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:37.477 11:14:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:37.477 No valid GPT data, bailing 00:03:37.477 11:14:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:37.477 
11:14:53 -- scripts/common.sh@391 -- # pt= 00:03:37.477 11:14:53 -- scripts/common.sh@392 -- # return 1 00:03:37.477 11:14:53 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:37.477 1+0 records in 00:03:37.477 1+0 records out 00:03:37.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465981 s, 225 MB/s 00:03:37.477 11:14:53 -- spdk/autotest.sh@118 -- # sync 00:03:37.477 11:14:53 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:37.477 11:14:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:37.477 11:14:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.387 11:14:54 -- spdk/autotest.sh@124 -- # uname -s 00:03:39.387 11:14:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:39.387 11:14:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:39.387 11:14:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.387 11:14:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.387 11:14:54 -- common/autotest_common.sh@10 -- # set +x 00:03:39.387 ************************************ 00:03:39.387 START TEST setup.sh 00:03:39.387 ************************************ 00:03:39.387 11:14:54 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:39.387 * Looking for test storage... 00:03:39.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.387 11:14:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:39.387 11:14:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.387 11:14:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:39.387 11:14:54 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:39.387 11:14:54 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:39.387 11:14:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.387 ************************************ 00:03:39.387 START TEST acl 00:03:39.387 ************************************ 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:39.387 * Looking for test storage... 
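Each of the four dd runs above follows the same guard: ask for a partition-table signature and zero the first MiB only when none is reported ("No valid GPT data, bailing"). A simplified stand-alone version of that guard, keeping only the blkid part of the traced block_in_use check (the helper first consults scripts/spdk-gpt.py); the device name is just an example, and the dd is destructive:

    block_in_use() {
        local block=$1
        # A non-empty PTTYPE means a partition table is present, so skip the wipe.
        [[ -n $(blkid -s PTTYPE -o value "$block") ]]
    }

    dev=/dev/nvme0n1    # example namespace, as in the log
    if ! block_in_use "$dev"; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi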
00:03:39.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:39.387 11:14:55 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.387 11:14:55 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:39.387 11:14:55 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.387 11:14:55 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.955 11:14:55 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.955 11:14:55 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.955 11:14:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.955 11:14:55 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.955 11:14:55 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.955 11:14:55 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.522 11:14:56 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.522 Hugepages 00:03:40.522 node hugesize free / total 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.522 00:03:40.522 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.522 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.781 11:14:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:40.782 11:14:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:40.782 11:14:56 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.782 11:14:56 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.782 11:14:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.782 ************************************ 00:03:40.782 START TEST denied 00:03:40.782 ************************************ 00:03:40.782 11:14:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:40.782 11:14:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:40.782 11:14:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:40.782 11:14:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.782 11:14:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:40.782 11:14:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:41.717 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:41.717 11:14:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.284 00:03:42.284 real 0m1.388s 00:03:42.284 user 0m0.572s 00:03:42.284 sys 0m0.758s 00:03:42.284 11:14:57 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.284 11:14:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:42.284 ************************************ 00:03:42.284 END TEST denied 00:03:42.284 ************************************ 00:03:42.284 11:14:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.284 11:14:57 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.284 11:14:57 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.284 11:14:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.284 ************************************ 00:03:42.284 START TEST allowed 00:03:42.284 ************************************ 00:03:42.284 11:14:57 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:42.284 11:14:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:42.284 11:14:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:42.284 11:14:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:42.284 11:14:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.284 11:14:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.853 11:14:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:43.792 00:03:43.792 real 0m1.436s 00:03:43.792 user 0m0.637s 00:03:43.792 sys 0m0.789s 00:03:43.792 11:14:59 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.792 ************************************ 00:03:43.792 END TEST 
allowed 00:03:43.792 ************************************ 00:03:43.792 11:14:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:43.792 ************************************ 00:03:43.792 END TEST acl 00:03:43.792 ************************************ 00:03:43.792 00:03:43.792 real 0m4.414s 00:03:43.792 user 0m1.931s 00:03:43.792 sys 0m2.430s 00:03:43.792 11:14:59 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.792 11:14:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:43.792 11:14:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.792 11:14:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.792 11:14:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.792 11:14:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.792 ************************************ 00:03:43.792 START TEST hugepages 00:03:43.792 ************************************ 00:03:43.792 11:14:59 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.792 * Looking for test storage... 00:03:43.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.792 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5852136 kB' 'MemAvailable: 7411804 kB' 'Buffers: 2436 kB' 'Cached: 1773732 kB' 'SwapCached: 0 kB' 'Active: 435528 kB' 'Inactive: 1445612 kB' 'Active(anon): 115460 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445612 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 107124 kB' 'Mapped: 48548 kB' 'Shmem: 10488 kB' 'KReclaimable: 61856 kB' 'Slab: 133788 kB' 'SReclaimable: 61856 kB' 'SUnreclaim: 71932 kB' 'KernelStack: 6332 kB' 'PageTables: 4140 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 336852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.793 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 
11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 
11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
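At this point the Hugepagesize lookup has returned 2048 kB and hugepages.sh has recorded its defaults, dropped any HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE overrides inherited from the environment, enumerated the NUMA nodes, and is zeroing every per-node hugepage pool just before the first test starts. A hedged sketch of that prologue, with the sysfs paths taken from the trace and the loop structure kept only for readability (this is not the exact SPDK code):

  #!/usr/bin/env bash
  # Prologue sketched from the trace: record the hugepage defaults, drop any
  # overrides inherited from the environment, then write 0 into every hugepage
  # pool of every NUMA node (requires root on a real system).
  default_hugepages=2048                                   # kB, from Hugepagesize
  default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  global_huge_nr=/proc/sys/vm/nr_hugepages
  unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE

  declare -A nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do      # get_nodes: one entry per node
      [[ -d $node ]] || continue
      nodes_sys[${node##*node}]=2048
  done

  for node in "${!nodes_sys[@]}"; do                       # clear_hp: zero each pool
      for hp in /sys/devices/system/node/node"$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes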
00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.794 11:14:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.794 11:14:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.794 11:14:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.794 11:14:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.794 ************************************ 00:03:43.794 START TEST default_setup 00:03:43.794 ************************************ 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.794 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.795 11:14:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.623 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.623 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945356 kB' 'MemAvailable: 9504828 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452268 kB' 'Inactive: 1445616 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 123336 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133428 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 71972 kB' 'KernelStack: 6320 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
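The default_setup test has just asked get_test_nr_hugepages for 2097152 kB on node 0; at the 2048 kB default page size that is 2097152 / 2048 = 1024 pages, which is exactly what the meminfo snapshot above reports after scripts/setup.sh rebinds the NVMe devices and allocates the pool (HugePages_Total: 1024, HugePages_Free: 1024). A small sketch of that sizing step, assuming the straightforward division the trace implies (variable names are illustrative):

  #!/usr/bin/env bash
  # Sizing step sketched from the trace: turn a kB request into a page count at
  # the default 2048 kB hugepage size, then pin the whole count to node 0.
  size_kb=2097152
  default_hugepages=2048                              # kB per hugepage
  nr_hugepages=$(( size_kb / default_hugepages ))     # 2097152 / 2048 = 1024

  nodes_test=()
  nodes_test[0]=$nr_hugepages                         # node 0 gets all 1024 pages
  echo "requesting ${nodes_test[0]} hugepages on node 0"

The snapshot itself is taken by mapfile-ing the whole of /proc/meminfo into an array up front, after which the same key-by-key scan shown earlier repeats, this time looking for AnonHugePages.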
00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.623 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.624 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945364 kB' 'MemAvailable: 9504836 kB' 'Buffers: 2436 kB' 'Cached: 
1773720 kB' 'SwapCached: 0 kB' 'Active: 452348 kB' 'Inactive: 1445616 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 123428 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133424 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 71968 kB' 'KernelStack: 6404 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.625 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.626 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.889 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945428 kB' 'MemAvailable: 9504900 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452260 kB' 'Inactive: 1445616 kB' 'Active(anon): 132192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133432 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 71976 kB' 'KernelStack: 6336 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.890 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.891 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.892 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.892 nr_hugepages=1024 00:03:44.892 resv_hugepages=0 00:03:44.892 surplus_hugepages=0 00:03:44.892 anon_hugepages=0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945428 kB' 'MemAvailable: 9504900 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452184 kB' 'Inactive: 1445616 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123292 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133432 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 71976 kB' 'KernelStack: 6352 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.892 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.893 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
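The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" lines above and below are xtrace output from the get_meminfo helper in setup/common.sh: it reads the whole of /proc/meminfo (or a node's sysfs meminfo file), then scans field by field until it reaches the one requested, so every field it skips produces a compare/continue/IFS/read quartet in the log. A minimal sketch of that helper, reconstructed from the trace itself (names and structure as they appear in the log, not the verbatim common.sh source):

    # Sketch of get_meminfo as implied by the trace above; extglob is needed
    # for the +([0-9]) patterns the script uses.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem

        # Node-scoped lookups (e.g. "get_meminfo HugePages_Surp 0") read the sysfs copy.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix; strip it

        # Every field that is not the requested one yields the "continue" lines seen in the log.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                # e.g. HugePages_Total -> 1024, HugePages_Rsvd -> 0
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

In this run the three global lookups resolve to HugePages_Surp=0, HugePages_Rsvd=0 and HugePages_Total=1024 (the "echo 1024" just below), so the hugepages.sh check that 1024 == nr_hugepages + surp + resv passes, and the same helper is then called once more as "get_meminfo HugePages_Surp 0" against /sys/devices/system/node/node0/meminfo for the per-node accounting.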
00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7945428 kB' 'MemUsed: 4296544 kB' 'SwapCached: 0 kB' 'Active: 451972 kB' 'Inactive: 1445616 kB' 'Active(anon): 131904 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445616 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1776156 kB' 'Mapped: 48524 kB' 'AnonPages: 123052 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61456 kB' 'Slab: 133432 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 71976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.894 
11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.894 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.895 node0=1024 expecting 1024 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.895 00:03:44.895 real 0m0.985s 00:03:44.895 user 0m0.457s 00:03:44.895 sys 0m0.457s 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.895 ************************************ 00:03:44.895 END TEST default_setup 00:03:44.895 11:15:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.895 ************************************ 00:03:44.895 11:15:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.895 11:15:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:44.895 11:15:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:44.895 11:15:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.895 ************************************ 00:03:44.895 START TEST per_node_1G_alloc 00:03:44.895 ************************************ 00:03:44.895 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:44.895 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.895 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:44.895 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.895 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.896 11:15:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.156 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.156 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8998684 kB' 'MemAvailable: 10558164 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1445624 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 
'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123428 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133464 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6392 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.156 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
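The block of xtrace above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-node meminfo file when a node id is supplied) into an array, then walks it one "Key: value" pair at a time with IFS=': ' and read -r var val _, issuing continue for every field that is not the requested one and finally echoing the matching value. A minimal standalone sketch of that lookup pattern follows; the helper name, the stream-based loop and the sed prefix strip are assumptions made for illustration, not code copied from the SPDK tree.

# Illustrative sketch of the meminfo lookup traced above (names assumed).
get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # field to look up, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo    # per-node view
    fi
    local var val _
    # Split "HugePages_Surp:     0" into var=HugePages_Surp, val=0 using IFS=': '.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node N "
    return 1                                     # requested field not present
}
# Example against the dump above: get_meminfo_sketch HugePages_Total  ->  512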
00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
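For context on the 512 that keeps appearing in these dumps: earlier in this test (the 11:15:00 trace), get_test_nr_hugepages translated the per_node_1G_alloc request of 1048576 kB on node 0 into NRHUGE=512 and HUGENODE=0, i.e. 1 GiB expressed as 512 default hugepages of 2048 kB each. A rough sketch of that size-to-pages conversion, with the helper and variable names assumed for illustration:

# Sketch of the conversion traced at setup/hugepages.sh@49..@73 above (assumed names).
get_test_nr_hugepages_sketch() {
    local size_kb=$1; shift                              # requested size per node, e.g. 1048576
    local node_ids=("$@")                                # target NUMA nodes, e.g. 0
    local default_kb
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    local nr_hugepages=$(( size_kb / default_kb ))       # 1048576 / 2048 = 512
    local -a nodes_test=()
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages                   # 512 pages requested on node 0
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}"
}
# get_test_nr_hugepages_sketch 1048576 0   ->   NRHUGE=512 HUGENODE=0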
00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.157 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.158 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8999128 kB' 'MemAvailable: 10558608 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452324 kB' 'Inactive: 1445624 kB' 'Active(anon): 132256 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123364 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133456 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72000 kB' 'KernelStack: 6328 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.421 11:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.421 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.422 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
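The repeated get_meminfo passes in this stretch (AnonHugePages, HugePages_Surp, HugePages_Rsvd, all 0 in the dumps above) feed verify_nr_hugepages, which then adds up the per-node counts and string-compares the result against the expectation, just as the default_setup run above ended with "node0=1024 expecting 1024" and [[ 1024 == \1\0\2\4 ]]. A condensed sketch of that final check, with the per-node surplus accounting and sorted-set bookkeeping of the real script simplified away:

# Condensed sketch of the expectation check performed by verify_nr_hugepages
# (structure inferred from the trace; the real script also folds per-node
# HugePages_Surp into nodes_test[] and records results in sorted_t/sorted_s).
verify_nr_hugepages_sketch() {
    local expected=$1                                    # 1024 for default_setup, 512 requested here
    local total
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "node0=$total expecting $expected"
    [[ $total == "$expected" ]]                          # string compare, as in [[ 1024 == \1\0\2\4 ]]
}
# In the default_setup run above this printed "node0=1024 expecting 1024" and passed.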
00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8999128 kB' 'MemAvailable: 10558608 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1445624 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123232 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133468 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6336 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.423 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.424 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.425 
11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.425 nr_hugepages=512 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.425 resv_hugepages=0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.425 surplus_hugepages=0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.425 anon_hugepages=0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.425 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8999128 kB' 'MemAvailable: 10558608 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452172 kB' 'Inactive: 1445624 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133468 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6320 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.426 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.427 11:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.427 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8999128 kB' 'MemUsed: 3242844 kB' 'SwapCached: 0 kB' 'Active: 452196 kB' 'Inactive: 1445624 kB' 'Active(anon): 132128 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1776156 kB' 'Mapped: 48524 kB' 'AnonPages: 123236 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61456 kB' 'Slab: 133468 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.428 11:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.428 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.429 node0=512 expecting 512 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.429 00:03:45.429 real 0m0.518s 00:03:45.429 user 0m0.275s 00:03:45.429 sys 0m0.277s 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.429 11:15:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.429 ************************************ 00:03:45.429 END TEST per_node_1G_alloc 00:03:45.429 ************************************ 00:03:45.429 11:15:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:45.429 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.429 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.429 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.429 ************************************ 00:03:45.429 START TEST even_2G_alloc 00:03:45.429 ************************************ 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.429 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:45.429 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.430 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.689 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.689 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.689 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.957 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.957 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.957 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.957 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956804 kB' 'MemAvailable: 9516284 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452792 kB' 'Inactive: 1445624 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123852 kB' 'Mapped: 48648 kB' 'Shmem: 10464 kB' 'KReclaimable: 61456 kB' 'Slab: 133472 kB' 'SReclaimable: 61456 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6340 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.958 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956552 kB' 'MemAvailable: 9516040 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452016 kB' 'Inactive: 1445624 kB' 'Active(anon): 131948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123068 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133484 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6352 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
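The xtrace above shows the get_meminfo helper from setup/common.sh scanning the captured /proc/meminfo snapshot key by key until it reaches the requested field (here HugePages_Surp) and echoing its value. The following is a minimal sketch of that parsing pattern, reconstructed only from the commands visible in the trace; the function name, the simplified node handling, and the loop form are assumptions, not the actual SPDK helper.

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the "Node <N> " prefix strip below

    # Sketch of a get_meminfo-style parser (hypothetical name), based on the xtrace above.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node stats live under /sys/devices/system/node/node<N>/meminfo when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <N> "; strip it so keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Echo the value and stop as soon as the requested key is found.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # Example use (value is in kB for sized fields, a plain count for HugePages_* fields):
    get_meminfo_sketch HugePages_Surp   # prints e.g. 0, matching the "echo 0" seen in the trace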
00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.959 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.960 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956552 kB' 'MemAvailable: 9516040 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1445624 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123344 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133484 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6352 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.961 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.962 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.963 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.964 nr_hugepages=1024 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.964 resv_hugepages=0 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.964 surplus_hugepages=0 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.964 anon_hugepages=0 00:03:45.964 11:15:01 
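
The xtrace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or a per-node meminfo file) with IFS=': ', skipping each field that is not the one asked for, and echoing the value once it reaches HugePages_Rsvd; hugepages.sh then records that 0 as resv. A simplified reconstruction of that helper, assuming only the behaviour visible in the trace (the real SPDK script snapshots the file into an array and strips the "Node <n> " prefixes with a pattern expansion; the sed call here is a stand-in for that step):

    get_meminfo() {
        # get_meminfo <field> [node] -- sketch of the parser traced above
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node index, prefer the per-node meminfo file if it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as above
            echo "$val"                        # e.g. 0 for HugePages_Rsvd on this run
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo HugePages_Rsvd it returns 0 on this host, and get_meminfo HugePages_Surp 0 reads node0's file, matching the calls that follow in the trace.
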
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956552 kB' 'MemAvailable: 9516040 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452020 kB' 'Inactive: 1445624 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123072 kB' 'Mapped: 48524 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133484 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6352 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.964 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.965 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956552 kB' 'MemUsed: 4285420 kB' 'SwapCached: 0 kB' 'Active: 452164 kB' 'Inactive: 1445624 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1776156 kB' 'Mapped: 48784 kB' 'AnonPages: 123312 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61472 kB' 'Slab: 133492 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.966 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 
11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.967 node0=1024 expecting 1024 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.967 00:03:45.967 real 0m0.519s 00:03:45.967 user 0m0.270s 00:03:45.967 sys 0m0.280s 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.967 11:15:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.967 
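
Everything even_2G_alloc just read back feeds one accounting check: the expected count (nr_hugepages=1024, i.e. 1024 x 2048 kB pages = 2 GiB, hence the test name) must equal HugePages_Total with surplus and reserved pages folded in, and since this VM has a single NUMA node the whole pool must land on node0, which is what the "node0=1024 expecting 1024" line confirms. A hedged sketch of that check, reusing the get_meminfo sketch above rather than the literal hugepages.sh code:

    nr_hugepages=1024                        # requested for the even 2 GiB allocation
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp 0)     # node0 surplus, 0 in the trace above
    total=$(get_meminfo HugePages_Total)     # 1024 in the trace above
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    # Single-node system: node0's share must be the whole expectation.
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $nr_hugepages"
    [[ $node0 == "$nr_hugepages" ]]          # the final [[ 1024 == 1024 ]] comparison
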
************************************ 00:03:45.967 END TEST even_2G_alloc 00:03:45.967 ************************************ 00:03:45.967 11:15:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:45.967 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.967 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.967 11:15:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.967 ************************************ 00:03:45.967 START TEST odd_alloc 00:03:45.967 ************************************ 00:03:45.967 11:15:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:45.967 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:45.967 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.968 11:15:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.489 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.489 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:46.489 11:15:02 
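
odd_alloc asks get_test_nr_hugepages for 2098176 kB (the trace's HUGEMEM=2049 expressed in kB) and settles on nr_hugepages=1025; with the 2048 kB default hugepage size that is simply the request rounded up to whole pages, and it matches the Hugetlb: 2099200 kB (1025 x 2048 kB) read back just below. A worked version of that arithmetic, as an illustration rather than the exact get_test_nr_hugepages logic:

    size_kb=2098176      # HUGEMEM=2049 in kB
    hugepage_kb=2048     # default hugepage size on this host
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "$nr_hugepages" # -> 1025, the odd page count this test is after
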
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7947452 kB' 'MemAvailable: 9506940 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1445624 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133444 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71972 kB' 'KernelStack: 6376 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.489 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.490 11:15:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.490 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7947452 kB' 'MemAvailable: 9506940 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1445624 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123592 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133436 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71964 kB' 'KernelStack: 6344 kB' 'PageTables: 4212 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.491 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.492 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.493 11:15:02 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7947452 kB' 'MemAvailable: 9506940 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452296 kB' 'Inactive: 1445624 kB' 'Active(anon): 132228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123336 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133488 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.493 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 
11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.494 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.495 nr_hugepages=1025 00:03:46.495 resv_hugepages=0 00:03:46.495 surplus_hugepages=0 00:03:46.495 anon_hugepages=0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7947452 
kB' 'MemAvailable: 9506940 kB' 'Buffers: 2436 kB' 'Cached: 1773720 kB' 'SwapCached: 0 kB' 'Active: 452220 kB' 'Inactive: 1445624 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123348 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133488 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72016 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 
11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.495 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.496 
11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.496 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7947452 kB' 'MemUsed: 4294520 kB' 'SwapCached: 0 kB' 'Active: 452280 kB' 'Inactive: 1445628 kB' 'Active(anon): 132212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1776160 kB' 'Mapped: 48528 kB' 'AnonPages: 123324 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61472 kB' 'Slab: 133488 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.497 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.498 node0=1025 expecting 1025 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:46.498 
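The odd_alloc verification traced above reduces to one pattern from setup/common.sh: scan a meminfo file line by line with IFS=': ' until the requested key (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) is reached, echo its value, and compare it against nr_hugepages + surp + resv. The sketch below restates that idea in a self-contained form; the function name, the sed-based stripping of the per-node "Node N " prefix, and the argument layout are illustrative simplifications, not the exact upstream helper (which uses mapfile and extglob expansion, as visible in the trace).

    get_meminfo_sketch() {
        # key to look up, e.g. HugePages_Total; optional NUMA node index
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node meminfo lines are prefixed with "Node <N> "; strip that so the
        # same "Key: value" parse works for both the global and per-node files.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$file")
        return 1
    }

On the VM in this log, get_meminfo_sketch HugePages_Total would print 1025 and get_meminfo_sketch HugePages_Surp 0 would print 0, matching the values the checks above compare in (( 1025 == nr_hugepages + surp + resv )) and in the final 'node0=1025 expecting 1025' assertion.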
00:03:46.498 real 0m0.530s 00:03:46.498 user 0m0.274s 00:03:46.498 sys 0m0.289s 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:46.498 11:15:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.498 ************************************ 00:03:46.498 END TEST odd_alloc 00:03:46.498 ************************************ 00:03:46.498 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:46.498 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.498 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.498 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.498 ************************************ 00:03:46.498 START TEST custom_alloc 00:03:46.498 ************************************ 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:46.498 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:46.499 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:46.499 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.499 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.071 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.071 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.071 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9001168 kB' 'MemAvailable: 10560660 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452564 kB' 'Inactive: 1445628 kB' 'Active(anon): 132496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123704 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133484 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72012 kB' 'KernelStack: 6356 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.071 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.072 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9001420 kB' 'MemAvailable: 10560912 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452164 kB' 'Inactive: 1445628 kB' 'Active(anon): 132096 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123276 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.073 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.074 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9001672 kB' 'MemAvailable: 10561164 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452188 kB' 'Inactive: 1445628 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123300 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133472 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72000 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.075 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.075 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.076 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.077 nr_hugepages=512 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:47.077 resv_hugepages=0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.077 surplus_hugepages=0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.077 anon_hugepages=0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.077 
11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9001936 kB' 'MemAvailable: 10561428 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452140 kB' 'Inactive: 1445628 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.077 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.078 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9001936 kB' 'MemUsed: 3240036 kB' 'SwapCached: 0 kB' 
'Active: 452184 kB' 'Inactive: 1445628 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1776160 kB' 'Mapped: 48528 kB' 'AnonPages: 123296 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.079 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 
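The HugePages_Surp lookup running through this stretch is the per-node variant: because get_meminfo was called with node 0, common.sh switched mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the mem_f= records a little earlier in this lookup) and strips the leading "Node 0 " from every line before the same key scan. Its result, 0 surplus pages, is what lets hugepages.sh conclude "node0=512 expecting 512" a little further on. An assumed shape of that per-node check, with names taken from the hugepages.sh@115-130 records but simplified, not the verbatim script:

    nodes_test=([0]=512)    # pages the test requested on node 0
    nodes_sys=([0]=512)     # pages the kernel actually reports for node 0
    resv=0 surp=0           # both read back as 0 on this runner
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]] || { echo "mismatch on node $node"; exit 1; }
    done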
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.080 node0=512 expecting 512 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.080 00:03:47.080 real 0m0.549s 00:03:47.080 user 0m0.262s 00:03:47.080 sys 0m0.297s 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.080 11:15:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.080 ************************************ 00:03:47.080 END TEST custom_alloc 00:03:47.080 ************************************ 00:03:47.080 11:15:02 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.081 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.081 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.081 11:15:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.081 ************************************ 00:03:47.081 START TEST no_shrink_alloc 00:03:47.081 ************************************ 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.081 11:15:02 
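custom_alloc finished cleanly just above (node0=512 expecting 512, with the test body taking about 0.55 s), and the no_shrink_alloc test that starts here opens with get_test_nr_hugepages 2097152 0, which turns a pool size into a page count pinned to node 0. Judging from the Hugepagesize: 2048 kB entries in the snapshots and the nr_hugepages=1024 set in the records that follow, the argument is a size in kB and the conversion is a plain division, roughly:

    # Hypothetical recreation of the sizing arithmetic; names mirror
    # hugepages.sh but this is not the verbatim helper.
    size_kb=2097152              # 2 GiB pool requested by no_shrink_alloc
    default_hugepages=2048       # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # -> 1024, all assigned to node 0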
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.081 11:15:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.656 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.656 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7950428 kB' 'MemAvailable: 9509920 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452220 kB' 'Inactive: 1445628 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123044 kB' 'Mapped: 48652 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133464 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6352 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.656 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
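The AnonHugePages lookup running here is gated by the hugepages.sh@96 record above: the runner's transparent-hugepage mode reads "always [madvise] never", which does not match *[never]*, so verify_nr_hugepages also records how much anonymous THP memory is in use (it comes back 0 further down, hence anon_hugepages=0). The snapshot at the top of this lookup already shows HugePages_Total: 1024 and Hugetlb: 2097152 kB, i.e. the 2 GiB pool requested for no_shrink_alloc was actually allocated. A rough equivalent of the gate follows; the sysfs path is the standard THP control file and is an assumption here, since the trace only shows the value that was read:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this box
    if [[ $thp != *"[never]"* ]]; then
        # only account for anonymous THP when THP is not disabled outright
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=$anon_kb"
    fi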
00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.657 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7950428 kB' 'MemAvailable: 9509920 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452252 kB' 'Inactive: 1445628 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123312 kB' 'Mapped: 48652 kB' 'Shmem: 10464 kB' 
'KReclaimable: 61472 kB' 'Slab: 133464 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6352 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.658 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.659 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7950428 kB' 'MemAvailable: 9509920 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452088 kB' 'Inactive: 1445628 kB' 'Active(anon): 132020 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123404 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133464 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6336 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.660 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.661 nr_hugepages=1024 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.661 resv_hugepages=0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.661 surplus_hugepages=0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.661 anon_hugepages=0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7950428 kB' 'MemAvailable: 9509920 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452004 kB' 'Inactive: 1445628 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123100 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133464 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71992 kB' 'KernelStack: 6352 kB' 'PageTables: 4320 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.661 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
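The trace above is setup/common.sh's get_meminfo helper walking every field of /proc/meminfo until it reaches HugePages_Total and returning 1024. A minimal sketch of that parsing loop, reconstructed from the common.sh@17-33 entries visible in the trace (the real SPDK helper may differ in detail):

    #!/usr/bin/env bash
    # Sketch of get_meminfo() as reconstructed from the setup/common.sh@17-33
    # trace entries above; not copied from SPDK source.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2      # field name, optional NUMA node
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node meminfo when a node was requested and exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Split each "Field: value kB" line and skip everything but $get --
        # exactly the long run of "continue" entries in the trace above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # In the run above: get_meminfo HugePages_Total  -> 1024
    #                   get_meminfo HugePages_Surp 0 -> 0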
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951096 kB' 'MemUsed: 4290876 kB' 'SwapCached: 0 kB' 'Active: 451964 kB' 'Inactive: 1445628 kB' 'Active(anon): 131896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1776160 kB' 'Mapped: 48528 kB' 'AnonPages: 123276 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61472 kB' 'Slab: 133464 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.662 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.663 node0=1024 expecting 1024 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.663 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.919 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.920 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.182 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.182 11:15:03 
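The "node0=1024 expecting 1024" line is the end of hugepages.sh's per-node check: it compares what each NUMA node actually allocated (nodes_sys, read via get_meminfo) against the expected count (nodes_test, adjusted by reserved and per-node surplus pages). A rough sketch of that check, with array names taken from the trace and the final comparison inferred rather than copied:

    # Sketch of the per-node verification traced above (setup/hugepages.sh
    # ~lines 110-130); reuses the get_meminfo() sketch shown earlier.
    shopt -s extglob
    nr_hugepages=1024 surp=0 resv=0
    declare -a nodes_sys nodes_test sorted_t sorted_s

    # Global sanity check: total hugepages must match requested + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))

    # get_nodes: how many hugepages each NUMA node really holds.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    # Expected per-node counts, bumped by reserved and per-node surplus pages.
    nodes_test[0]=$nr_hugepages
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

    # Report and compare actual vs expected for every node.
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # passes here: both sets are {1024}

The "INFO: Requested 512 hugepages but 1024 already allocated on node0" line just before it comes from scripts/setup.sh being re-run with NRHUGE=512 and CLEAR_HUGE=no; the test name, no_shrink_alloc, apparently checks exactly that an existing, larger allocation is left in place rather than shrunk to the new request.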
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452896 kB' 'Inactive: 1445628 kB' 'Active(anon): 132828 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123992 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133480 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6416 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.182 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.183 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452132 kB' 'Inactive: 1445628 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.184 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452132 kB' 'Inactive: 1445628 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123412 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133480 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 72008 kB' 'KernelStack: 6364 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32, 00:03:48.184-186: the per-key scan walks every field above, from MemTotal through HugePages_Rsvd, and hits continue on each one because none of them is HugePages_Surp]
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
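The setup records above also show how the helper picks its input file: node= is empty, so the check for a per-node meminfo path under /sys/devices/system/node fails, mem_f falls back to /proc/meminfo, and the lines read by mapfile have any leading "Node <id> " prefix stripped with an extglob pattern so per-node and system-wide files parse the same way. A rough reconstruction of that selection logic, again for illustration only (read_meminfo_lines is a made-up name):

    #!/usr/bin/env bash
    # Illustrative reconstruction of the source selection seen in the trace:
    # /proc/meminfo by default, the per-node meminfo file when a NUMA node is
    # given, with the "Node <id> " prefix removed so both formats look alike.
    shopt -s extglob

    read_meminfo_lines() {
        local node=${1-} mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files print "Node 0 MemTotal: ... kB"; drop that prefix,
        # mirroring mem=("${mem[@]#Node +([0-9]) }") in the trace.
        mem=("${mem[@]#Node +([0-9]) }")
        printf '%s\n' "${mem[@]}"
    }

    read_meminfo_lines | grep '^HugePages_Total:'     # system-wide pool size
    read_meminfo_lines 0 | grep '^HugePages_Total:'   # node 0 view when that node's meminfo exists, else the same system-wide file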
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31, 00:03:48.186: same setup as above, with local get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, the Node-prefix strip and the IFS=': ' / read -r var val _ loop]
00:03:48.186 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452180 kB' 'Inactive: 1445628 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6396 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32, 00:03:48.186-188: the scan again skips every field from MemTotal onwards with continue until it reaches HugePages_Rsvd]
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:48.188 nr_hugepages=1024
resv_hugepages=0
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
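At this point the test has anon=0, surp=0 and resv=0, prints the four summary lines above, and checks them arithmetically against the 1024 pages it configured before re-reading HugePages_Total. A sketch of the idea behind that "no shrink" check, written as an illustration rather than a copy of hugepages.sh (the variable names below are made up):

    #!/usr/bin/env bash
    # Illustrative "no shrink" check: the hugepage pool the kernel reports
    # must still match the count that was requested; surplus, reserved and
    # anonymous hugepages are read alongside it, as in the traced run
    # (all 0 there).
    requested=${1:-$(cat /proc/sys/vm/nr_hugepages)}   # the traced run used 1024

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$requested"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    if (( total == requested )); then
        echo "hugepage pool intact: HugePages_Total=$total (surp=$surp resv=$resv)"
    else
        echo "hugepage pool changed: HugePages_Total=$total, requested $requested" >&2
        exit 1
    fi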
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-31, 00:03:48.188: same setup as before, with local get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, the Node-prefix strip and the IFS=': ' / read -r var val _ loop]
00:03:48.188 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 1773724 kB' 'SwapCached: 0 kB' 'Active: 452212 kB' 'Inactive: 1445628 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123540 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'KernelStack: 6396 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32, 00:03:48.188-189: the scan skips MemTotal through HardwareCorrupted with continue; none of them match HugePages_Total]
00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.189 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.190 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951396 kB' 'MemUsed: 4290576 kB' 'SwapCached: 0 kB' 'Active: 452216 kB' 'Inactive: 1445628 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1445628 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1776160 kB' 'Mapped: 48588 kB' 'AnonPages: 123560 kB' 'Shmem: 10464 kB' 'KernelStack: 6412 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61472 kB' 'Slab: 133468 kB' 'SReclaimable: 61472 kB' 'SUnreclaim: 71996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 
11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.190 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
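The long scan above (and the few remaining keys skipped just below) is setup/common.sh's get_meminfo helper at work: it reads the chosen meminfo file one line at a time with IFS=': ', ignores every key that is not the one requested, and finally echoes the matching value (HugePages_Surp for node 0 here, HugePages_Total earlier). A minimal sketch of that pattern, assuming plain /proc/meminfo input (the per-node files carry a "Node <n> " prefix that the real helper strips first):

  #!/usr/bin/env bash
  # Sketch of the key scan traced above: split each line on ": ",
  # skip non-matching keys, print the value of the requested one.
  get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"      # e.g. 1024 for HugePages_Total on this VM
              return 0
          fi
      done < "$file"
      return 1
  }
  get_meminfo_value HugePages_Total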
00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.191 node0=1024 expecting 1024 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.191 00:03:48.191 real 0m1.031s 00:03:48.191 user 0m0.547s 00:03:48.191 sys 0m0.548s 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.191 11:15:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 ************************************ 00:03:48.191 END TEST no_shrink_alloc 00:03:48.191 ************************************ 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.191 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.192 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.192 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.192 11:15:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.192 00:03:48.192 real 0m4.544s 00:03:48.192 user 0m2.248s 00:03:48.192 sys 0m2.383s 00:03:48.192 11:15:04 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.192 11:15:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 ************************************ 00:03:48.192 END TEST hugepages 00:03:48.192 ************************************ 00:03:48.192 11:15:04 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:48.192 11:15:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.192 11:15:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.192 11:15:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.449 ************************************ 00:03:48.449 START TEST driver 00:03:48.449 ************************************ 00:03:48.449 11:15:04 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:48.449 * Looking for test storage... 00:03:48.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:48.449 11:15:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:48.449 11:15:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.449 11:15:04 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.015 11:15:04 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:49.015 11:15:04 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.015 11:15:04 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.015 11:15:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.015 ************************************ 00:03:49.015 START TEST guess_driver 00:03:49.015 ************************************ 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:49.015 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:49.015 Looking for driver=uio_pci_generic 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.015 11:15:04 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.613 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.872 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.872 11:15:05 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:49.872 11:15:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.872 11:15:05 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.438 00:03:50.438 real 0m1.367s 00:03:50.438 user 0m0.525s 00:03:50.438 sys 0m0.820s 00:03:50.438 11:15:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.438 11:15:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.438 ************************************ 00:03:50.438 END TEST guess_driver 00:03:50.438 ************************************ 00:03:50.438 00:03:50.438 real 0m2.033s 00:03:50.438 user 0m0.755s 00:03:50.438 sys 0m1.300s 00:03:50.438 11:15:06 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:50.438 11:15:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.438 ************************************ 00:03:50.438 END TEST driver 00:03:50.438 ************************************ 00:03:50.438 11:15:06 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:50.438 11:15:06 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.438 11:15:06 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.438 11:15:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.438 ************************************ 00:03:50.438 START TEST devices 00:03:50.438 
************************************ 00:03:50.438 11:15:06 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:50.438 * Looking for test storage... 00:03:50.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.438 11:15:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.438 11:15:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.438 11:15:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.438 11:15:06 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.372 11:15:06 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
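The devices suite starts by filtering out zoned namespaces: for every /sys/block/nvme* entry it reads queue/zoned and only keeps devices that report "none", which all four namespaces on this VM do. A standalone sketch of that filter (the nvme* glob simply matches what this host exposes; this is not the exact get_zoned_devs helper):

  #!/usr/bin/env bash
  # Sketch: collect block devices whose queue/zoned attribute is not "none".
  declare -A zoned_devs=()
  for dev in /sys/block/nvme*; do
      [[ -e "$dev/queue/zoned" ]] || continue     # attribute may be absent
      if [[ $(<"$dev/queue/zoned") != none ]]; then
          zoned_devs["${dev##*/}"]=1              # remember zoned namespaces
      fi
  done
  echo "zoned devices found: ${#zoned_devs[@]}"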
00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.372 11:15:06 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:51.372 11:15:06 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:51.372 11:15:06 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:51.372 No valid GPT data, bailing 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:51.372 No valid GPT data, bailing 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
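Each candidate namespace is then probed for an existing partition table (spdk-gpt.py plus blkid -s PTTYPE -o value; "No valid GPT data, bailing" and an empty PTTYPE mean the disk is free) and its capacity is compared against min_disk_size (3 GiB). A rough equivalent of that "free and large enough" check, assuming plain blkid and the sysfs size attribute rather than the SPDK wrappers:

  #!/usr/bin/env bash
  # Sketch: accept a disk only if it has no partition table and is at least
  # MIN_DISK_SIZE bytes, mirroring the block_in_use / size checks above.
  MIN_DISK_SIZE=$((3 * 1024 * 1024 * 1024))          # 3221225472, as in the trace
  disk_is_usable() {
      local dev=$1 pt size
      pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
      [[ -z $pt ]] || return 1                       # already carries a partition table
      size=$(( $(<"/sys/block/$dev/size") * 512 ))   # 512-byte sectors -> bytes
      (( size >= MIN_DISK_SIZE ))
  }
  disk_is_usable nvme0n1 && echo "nvme0n1 is free and large enough"

The namespaces here report 4 GiB and 5 GiB, so all four pass and nvme0n1 becomes the test disk, as the trace below shows.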
00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:51.372 No valid GPT data, bailing 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:51.372 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:51.372 11:15:07 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:51.372 No valid GPT data, bailing 00:03:51.630 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.630 11:15:07 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:51.630 11:15:07 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:51.630 11:15:07 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:51.630 11:15:07 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:51.630 11:15:07 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:51.630 11:15:07 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:51.630 11:15:07 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:51.630 11:15:07 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.630 11:15:07 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.630 11:15:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:51.630 ************************************ 00:03:51.630 START TEST nvme_mount 00:03:51.630 ************************************ 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:51.630 11:15:07 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:52.564 Creating new GPT entries in memory. 00:03:52.564 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:52.564 other utilities. 00:03:52.564 11:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:52.564 11:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.564 11:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:52.564 11:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:52.564 11:15:08 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:53.500 Creating new GPT entries in memory. 00:03:53.500 The operation has completed successfully. 
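The nvme_mount test begins by re-partitioning the scratch disk: sgdisk --zap-all destroys any existing GPT/MBR structures (the "GPT data structures destroyed!" lines above are its normal output), then a single small partition is created with sgdisk --new=1:2048:264191 while sync_dev_uevents.sh waits for the kernel to announce nvme0n1p1. Condensed into a standalone sketch, with udevadm settle standing in for the uevent helper:

  #!/usr/bin/env bash
  # Sketch of the partition_drive step traced above.
  # WARNING: destructive; point $disk at a disposable test device only.
  set -euo pipefail
  disk=/dev/nvme0n1                      # device and sector range taken from the trace
  sgdisk "$disk" --zap-all               # wipe existing partition tables
  sgdisk "$disk" --new=1:2048:264191     # one small partition, sectors 2048-264191
  udevadm settle                         # stand-in for sync_dev_uevents.sh
  [[ -b "${disk}p1" ]] && echo "created ${disk}p1"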
00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57034 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:53.500 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.759 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.023 11:15:09 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.023 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.023 11:15:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.297 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.297 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:54.297 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.297 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.297 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:54.297 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:54.297 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.297 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.297 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.556 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.815 11:15:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.079 11:15:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.337 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.338 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.338 11:15:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.338 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:55.338 00:03:55.338 real 0m3.912s 00:03:55.338 user 0m0.662s 00:03:55.338 sys 0m0.977s 00:03:55.338 11:15:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.338 11:15:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:55.338 ************************************ 00:03:55.338 END TEST nvme_mount 00:03:55.338 
************************************ 00:03:55.595 11:15:11 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:55.595 11:15:11 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.595 11:15:11 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.595 11:15:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.595 ************************************ 00:03:55.595 START TEST dm_mount 00:03:55.595 ************************************ 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.595 11:15:11 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.529 Creating new GPT entries in memory. 00:03:56.529 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.529 other utilities. 00:03:56.529 11:15:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.529 11:15:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.529 11:15:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.529 11:15:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.529 11:15:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:57.464 Creating new GPT entries in memory. 00:03:57.464 The operation has completed successfully. 
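The dm_mount run above re-partitions the test disk with the same shared helper the nvme_mount run used: zap the GPT label, then carve two partitions while holding an flock on the whole device, and synchronize on the resulting partition uevents. A minimal stand-alone sketch of that flow, using the device name and sector ranges shown in the log (everything else, including error handling, is illustrative):

    disk=/dev/nvme0n1
    # clear any previous GPT/MBR structures ("GPT data structures destroyed!")
    sgdisk "$disk" --zap-all
    # create the two test partitions under a whole-disk lock
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # becomes nvme0n1p1
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # becomes nvme0n1p2
    # the real helper also runs scripts/sync_dev_uevents.sh so the kernel has
    # announced both partitions before they are formatted and mounted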
00:03:57.464 11:15:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.464 11:15:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.464 11:15:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.464 11:15:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.464 11:15:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:58.857 The operation has completed successfully. 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57470 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:58.857 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.858 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.858 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.858 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.115 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.116 11:15:14 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.116 11:15:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.374 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:59.633 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:03:59.633 00:03:59.633 real 0m4.180s 00:03:59.633 user 0m0.432s 00:03:59.633 sys 0m0.699s 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.633 11:15:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.633 ************************************ 00:03:59.633 END TEST dm_mount 00:03:59.633 ************************************ 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.633 11:15:15 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.892 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.892 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.892 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.892 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.892 11:15:15 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:59.892 ************************************ 00:03:59.892 END TEST devices 00:03:59.892 ************************************ 00:03:59.892 00:03:59.892 real 0m9.610s 00:03:59.892 user 0m1.736s 00:03:59.892 sys 0m2.268s 00:03:59.892 11:15:15 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.892 11:15:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:00.150 00:04:00.150 real 0m20.871s 00:04:00.150 user 0m6.763s 00:04:00.150 sys 0m8.551s 00:04:00.150 11:15:15 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.150 11:15:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.150 ************************************ 00:04:00.150 END TEST setup.sh 00:04:00.150 ************************************ 00:04:00.150 11:15:15 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.719 Hugepages 00:04:00.719 node hugesize free / total 00:04:00.719 node0 1048576kB 0 / 0 00:04:00.719 node0 2048kB 2048 / 2048 00:04:00.719 00:04:00.719 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.719 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.978 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:00.978 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:00.978 11:15:16 -- spdk/autotest.sh@130 -- # uname -s 00:04:00.978 11:15:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:00.978 11:15:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:00.978 11:15:16 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.545 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.804 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.804 11:15:17 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:02.741 11:15:18 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:02.741 11:15:18 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:02.741 11:15:18 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.741 11:15:18 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:02.741 11:15:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:02.741 11:15:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:02.741 11:15:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.741 11:15:18 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.741 11:15:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:02.741 11:15:18 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:02.741 11:15:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.741 11:15:18 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.309 Waiting for block devices as requested 00:04:03.309 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.309 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.309 11:15:19 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.309 11:15:19 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.309 11:15:19 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.309 11:15:19 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
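The nvme_namespace_revert loop above walks every NVMe controller and decides whether a revert is needed by parsing two fields out of nvme-cli's id-ctrl output: OACS bit 3 (namespace management support) and the unallocated capacity (unvmcap). A condensed sketch of that check for one controller, assuming nvme-cli is installed; the device path and the 0x12a value match this run:

    ctrlr=/dev/nvme1
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # " 0x12a" in this log
    if (( (oacs & 0x8) != 0 )); then                               # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: nothing to revert"     # the branch this log takes
    fi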
00:04:03.309 11:15:19 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.309 11:15:19 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.309 11:15:19 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.309 11:15:19 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1557 -- # continue 00:04:03.309 11:15:19 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:03.309 11:15:19 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.309 11:15:19 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:03.309 11:15:19 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:03.309 11:15:19 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:03.309 11:15:19 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:03.568 11:15:19 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:03.568 11:15:19 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:03.568 11:15:19 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:03.568 11:15:19 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:03.568 11:15:19 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:03.568 11:15:19 -- common/autotest_common.sh@1557 -- # continue 00:04:03.568 11:15:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.568 11:15:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.568 11:15:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.568 11:15:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.568 11:15:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.568 11:15:19 -- common/autotest_common.sh@10 -- # set +x 00:04:03.568 11:15:19 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.394 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.394 11:15:20 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:04.394 11:15:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.394 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.394 11:15:20 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:04.394 11:15:20 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:04.394 11:15:20 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.394 11:15:20 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:04.394 11:15:20 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:04.394 11:15:20 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:04.394 11:15:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:04.394 11:15:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:04.394 11:15:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.394 11:15:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.394 11:15:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:04.394 11:15:20 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:04.394 11:15:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.394 11:15:20 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:04.394 11:15:20 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.394 11:15:20 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:04.394 11:15:20 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.394 11:15:20 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:04.394 11:15:20 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.394 11:15:20 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:04.394 11:15:20 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.394 11:15:20 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:04.394 11:15:20 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:04.394 11:15:20 -- common/autotest_common.sh@1593 -- # return 0 00:04:04.394 11:15:20 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:04.394 11:15:20 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:04.394 11:15:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.394 11:15:20 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.394 11:15:20 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:04.394 11:15:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.394 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.394 11:15:20 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:04.394 11:15:20 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.394 11:15:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.394 11:15:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.394 11:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.394 ************************************ 00:04:04.394 START TEST env 00:04:04.394 ************************************ 00:04:04.394 11:15:20 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.653 * Looking for test storage... 
00:04:04.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.653 11:15:20 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.653 11:15:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.653 11:15:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.653 11:15:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.653 ************************************ 00:04:04.653 START TEST env_memory 00:04:04.653 ************************************ 00:04:04.653 11:15:20 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.653 00:04:04.653 00:04:04.653 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.653 http://cunit.sourceforge.net/ 00:04:04.653 00:04:04.653 00:04:04.653 Suite: memory 00:04:04.653 Test: alloc and free memory map ...[2024-07-25 11:15:20.381258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.653 passed 00:04:04.653 Test: mem map translation ...[2024-07-25 11:15:20.441868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.653 [2024-07-25 11:15:20.441938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.653 [2024-07-25 11:15:20.442035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.653 [2024-07-25 11:15:20.442066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.653 passed 00:04:04.912 Test: mem map registration ...[2024-07-25 11:15:20.540057] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.913 [2024-07-25 11:15:20.540125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:04.913 passed 00:04:04.913 Test: mem map adjacent registrations ...passed 00:04:04.913 00:04:04.913 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.913 suites 1 1 n/a 0 0 00:04:04.913 tests 4 4 4 0 0 00:04:04.913 asserts 152 152 152 0 n/a 00:04:04.913 00:04:04.913 Elapsed time = 0.342 seconds 00:04:04.913 00:04:04.913 real 0m0.377s 00:04:04.913 user 0m0.346s 00:04:04.913 sys 0m0.027s 00:04:04.913 11:15:20 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.913 11:15:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.913 ************************************ 00:04:04.913 END TEST env_memory 00:04:04.913 ************************************ 00:04:04.913 11:15:20 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.913 11:15:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.913 11:15:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.913 11:15:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.913 ************************************ 00:04:04.913 START TEST env_vtophys 00:04:04.913 ************************************ 00:04:04.913 11:15:20 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.913 EAL: lib.eal log level changed from notice to debug 00:04:04.913 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.913 EAL: Detected lcore 9 as core 0 on socket 0 00:04:05.171 EAL: Maximum logical cores by configuration: 128 00:04:05.171 EAL: Detected CPU lcores: 10 00:04:05.171 EAL: Detected NUMA nodes: 1 00:04:05.171 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.171 EAL: Detected shared linkage of DPDK 00:04:05.171 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.171 EAL: Selected IOVA mode 'PA' 00:04:05.171 EAL: Probing VFIO support... 00:04:05.171 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.171 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:05.171 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.171 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.171 EAL: Setting up physically contiguous memory... 00:04:05.171 EAL: Setting maximum number of open files to 524288 00:04:05.171 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.171 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.171 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.171 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.171 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.171 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.171 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.171 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.171 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.171 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.171 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.171 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.172 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.172 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.172 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.172 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.172 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.172 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.172 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.172 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.172 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.172 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.172 EAL: Hugepages will be freed exactly as allocated. 
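EAL probes for VFIO here, does not find the kernel module, and falls back to IOVA mode PA, reserving virtual address ranges for its memseg lists out of the 2 MB hugepage pool configured earlier in the log. Two quick host-side checks that correspond to those messages (standard procfs/sysfs paths; they are not part of the test binary itself):

    # EAL printed "Module /sys/module/vfio not found", hence the uio + IOVA PA fallback
    [[ -d /sys/module/vfio ]] || echo "vfio not loaded, EAL will select IOVA mode PA"
    # the 2048 x 2 MB pool shown in the Hugepages table above
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo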
00:04:05.172 EAL: No shared files mode enabled, IPC is disabled 00:04:05.172 EAL: No shared files mode enabled, IPC is disabled 00:04:05.172 EAL: TSC frequency is ~2200000 KHz 00:04:05.172 EAL: Main lcore 0 is ready (tid=7fc2615d9a40;cpuset=[0]) 00:04:05.172 EAL: Trying to obtain current memory policy. 00:04:05.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.172 EAL: Restoring previous memory policy: 0 00:04:05.172 EAL: request: mp_malloc_sync 00:04:05.172 EAL: No shared files mode enabled, IPC is disabled 00:04:05.172 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.172 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.172 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.172 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.172 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:05.172 00:04:05.172 00:04:05.172 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.172 http://cunit.sourceforge.net/ 00:04:05.172 00:04:05.172 00:04:05.172 Suite: components_suite 00:04:05.739 Test: vtophys_malloc_test ...passed 00:04:05.739 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.739 EAL: Restoring previous memory policy: 4 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.739 EAL: Trying to obtain current memory policy. 00:04:05.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.739 EAL: Restoring previous memory policy: 4 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.739 EAL: Trying to obtain current memory policy. 00:04:05.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.739 EAL: Restoring previous memory policy: 4 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.739 EAL: Trying to obtain current memory policy. 
00:04:05.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.739 EAL: Restoring previous memory policy: 4 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.739 EAL: Trying to obtain current memory policy. 00:04:05.739 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.739 EAL: Restoring previous memory policy: 4 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.739 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.739 EAL: request: mp_malloc_sync 00:04:05.739 EAL: No shared files mode enabled, IPC is disabled 00:04:05.739 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.997 EAL: Trying to obtain current memory policy. 00:04:05.997 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.997 EAL: Restoring previous memory policy: 4 00:04:05.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.997 EAL: request: mp_malloc_sync 00:04:05.997 EAL: No shared files mode enabled, IPC is disabled 00:04:05.997 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.997 EAL: request: mp_malloc_sync 00:04:05.997 EAL: No shared files mode enabled, IPC is disabled 00:04:05.997 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.256 EAL: Trying to obtain current memory policy. 00:04:06.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.256 EAL: Restoring previous memory policy: 4 00:04:06.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.256 EAL: request: mp_malloc_sync 00:04:06.256 EAL: No shared files mode enabled, IPC is disabled 00:04:06.256 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.256 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.515 EAL: request: mp_malloc_sync 00:04:06.515 EAL: No shared files mode enabled, IPC is disabled 00:04:06.515 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.515 EAL: Trying to obtain current memory policy. 00:04:06.515 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.773 EAL: Restoring previous memory policy: 4 00:04:06.773 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.773 EAL: request: mp_malloc_sync 00:04:06.773 EAL: No shared files mode enabled, IPC is disabled 00:04:06.773 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.032 EAL: request: mp_malloc_sync 00:04:07.032 EAL: No shared files mode enabled, IPC is disabled 00:04:07.032 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.600 EAL: Trying to obtain current memory policy. 
00:04:07.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.600 EAL: Restoring previous memory policy: 4 00:04:07.600 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.600 EAL: request: mp_malloc_sync 00:04:07.600 EAL: No shared files mode enabled, IPC is disabled 00:04:07.600 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.536 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.536 EAL: request: mp_malloc_sync 00:04:08.536 EAL: No shared files mode enabled, IPC is disabled 00:04:08.536 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.473 EAL: Trying to obtain current memory policy. 00:04:09.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.473 EAL: Restoring previous memory policy: 4 00:04:09.473 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.473 EAL: request: mp_malloc_sync 00:04:09.473 EAL: No shared files mode enabled, IPC is disabled 00:04:09.473 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.376 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.376 EAL: request: mp_malloc_sync 00:04:11.376 EAL: No shared files mode enabled, IPC is disabled 00:04:11.376 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.755 passed 00:04:12.755 00:04:12.755 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.756 suites 1 1 n/a 0 0 00:04:12.756 tests 2 2 2 0 0 00:04:12.756 asserts 5390 5390 5390 0 n/a 00:04:12.756 00:04:12.756 Elapsed time = 7.581 seconds 00:04:12.756 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.756 EAL: request: mp_malloc_sync 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:12.756 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 00:04:13.014 real 0m7.897s 00:04:13.014 user 0m6.716s 00:04:13.014 sys 0m1.016s 00:04:13.014 ************************************ 00:04:13.014 END TEST env_vtophys 00:04:13.014 ************************************ 00:04:13.014 11:15:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.014 11:15:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.014 11:15:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.014 11:15:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.014 11:15:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.014 11:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.014 ************************************ 00:04:13.014 START TEST env_pci 00:04:13.014 ************************************ 00:04:13.014 11:15:28 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.014 00:04:13.014 00:04:13.014 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.015 http://cunit.sourceforge.net/ 00:04:13.015 00:04:13.015 00:04:13.015 Suite: pci 00:04:13.015 Test: pci_hook ...[2024-07-25 11:15:28.723901] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58738 has claimed it 00:04:13.015 passed 00:04:13.015 00:04:13.015 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.015 suites 1 1 n/a 0 0 00:04:13.015 tests 1 1 1 0 0 00:04:13.015 asserts 25 25 25 0 n/a 00:04:13.015 00:04:13.015 Elapsed time = 0.008 seconds 00:04:13.015 EAL: Cannot find 
device (10000:00:01.0) 00:04:13.015 EAL: Failed to attach device on primary process 00:04:13.015 00:04:13.015 real 0m0.086s 00:04:13.015 user 0m0.037s 00:04:13.015 sys 0m0.048s 00:04:13.015 11:15:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.015 11:15:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.015 ************************************ 00:04:13.015 END TEST env_pci 00:04:13.015 ************************************ 00:04:13.015 11:15:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.015 11:15:28 env -- env/env.sh@15 -- # uname 00:04:13.015 11:15:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.015 11:15:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.015 11:15:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.015 11:15:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:13.015 11:15:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.015 11:15:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.015 ************************************ 00:04:13.015 START TEST env_dpdk_post_init 00:04:13.015 ************************************ 00:04:13.015 11:15:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.015 EAL: Detected CPU lcores: 10 00:04:13.015 EAL: Detected NUMA nodes: 1 00:04:13.015 EAL: Detected shared linkage of DPDK 00:04:13.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.273 EAL: Selected IOVA mode 'PA' 00:04:13.273 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.273 Starting DPDK initialization... 00:04:13.273 Starting SPDK post initialization... 00:04:13.273 SPDK NVMe probe 00:04:13.273 Attaching to 0000:00:10.0 00:04:13.273 Attaching to 0000:00:11.0 00:04:13.273 Attached to 0000:00:10.0 00:04:13.273 Attached to 0000:00:11.0 00:04:13.273 Cleaning up... 
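env_dpdk_post_init initializes EAL with a single core and a fixed base virtual address, probes the two emulated NVMe controllers through the spdk_nvme driver, and then tears everything down. Re-running it by hand uses the same command line recorded above (binary path and flags copied from the log):

    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
    # on this host it attached to 0000:00:10.0 and 0000:00:11.0, then printed "Cleaning up..."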
00:04:13.273 00:04:13.273 real 0m0.262s 00:04:13.273 user 0m0.089s 00:04:13.273 sys 0m0.072s 00:04:13.273 11:15:29 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.273 11:15:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.273 ************************************ 00:04:13.273 END TEST env_dpdk_post_init 00:04:13.273 ************************************ 00:04:13.273 11:15:29 env -- env/env.sh@26 -- # uname 00:04:13.273 11:15:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.273 11:15:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.273 11:15:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.273 11:15:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.274 11:15:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.274 ************************************ 00:04:13.274 START TEST env_mem_callbacks 00:04:13.274 ************************************ 00:04:13.274 11:15:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.532 EAL: Detected CPU lcores: 10 00:04:13.532 EAL: Detected NUMA nodes: 1 00:04:13.532 EAL: Detected shared linkage of DPDK 00:04:13.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.532 EAL: Selected IOVA mode 'PA' 00:04:13.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.532 00:04:13.532 00:04:13.532 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.532 http://cunit.sourceforge.net/ 00:04:13.532 00:04:13.532 00:04:13.532 Suite: memory 00:04:13.532 Test: test ... 00:04:13.532 register 0x200000200000 2097152 00:04:13.532 malloc 3145728 00:04:13.532 register 0x200000400000 4194304 00:04:13.532 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.532 malloc 64 00:04:13.532 buf 0x2000004ffec0 len 64 PASSED 00:04:13.532 malloc 4194304 00:04:13.532 register 0x200000800000 6291456 00:04:13.532 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.532 free 0x2000004fffc0 3145728 00:04:13.532 free 0x2000004ffec0 64 00:04:13.532 unregister 0x200000400000 4194304 PASSED 00:04:13.532 free 0x2000009fffc0 4194304 00:04:13.532 unregister 0x200000800000 6291456 PASSED 00:04:13.532 malloc 8388608 00:04:13.532 register 0x200000400000 10485760 00:04:13.532 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.532 free 0x2000005fffc0 8388608 00:04:13.532 unregister 0x200000400000 10485760 PASSED 00:04:13.791 passed 00:04:13.791 00:04:13.791 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.791 suites 1 1 n/a 0 0 00:04:13.791 tests 1 1 1 0 0 00:04:13.791 asserts 15 15 15 0 n/a 00:04:13.791 00:04:13.791 Elapsed time = 0.061 seconds 00:04:13.791 00:04:13.791 real 0m0.276s 00:04:13.791 user 0m0.098s 00:04:13.791 sys 0m0.076s 00:04:13.791 11:15:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.791 11:15:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.791 ************************************ 00:04:13.791 END TEST env_mem_callbacks 00:04:13.791 ************************************ 00:04:13.791 00:04:13.791 real 0m9.246s 00:04:13.791 user 0m7.401s 00:04:13.791 sys 0m1.456s 00:04:13.791 11:15:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.791 11:15:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.791 ************************************ 00:04:13.791 END TEST env 00:04:13.791 
************************************ 00:04:13.791 11:15:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.791 11:15:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.791 11:15:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.791 11:15:29 -- common/autotest_common.sh@10 -- # set +x 00:04:13.791 ************************************ 00:04:13.791 START TEST rpc 00:04:13.791 ************************************ 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.791 * Looking for test storage... 00:04:13.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.791 11:15:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58857 00:04:13.791 11:15:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.791 11:15:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.791 11:15:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58857 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 58857 ']' 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.791 11:15:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.074 [2024-07-25 11:15:29.721439] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:04:14.074 [2024-07-25 11:15:29.721662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58857 ] 00:04:14.074 [2024-07-25 11:15:29.893789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.332 [2024-07-25 11:15:30.114586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.332 [2024-07-25 11:15:30.114676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58857' to capture a snapshot of events at runtime. 00:04:14.332 [2024-07-25 11:15:30.114725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.332 [2024-07-25 11:15:30.114738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.332 [2024-07-25 11:15:30.114754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58857 for offline analysis/debug. 
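Everything rpc.sh does above goes through rpc_cmd, which ultimately issues JSON-RPC 2.0 requests to the spdk_tgt listening on /var/tmp/spdk.sock (the socket named in the waitforlisten message). As a rough illustration of that plumbing, here is a minimal stand-alone client, assuming only Python 3 and an already-running target; SPDK's own scripts/rpc.py is the supported way to do this, and the method shown (spdk_get_version) is the one exercised later in the skip_rpc section of this log:

    import json
    import socket

    SOCK_PATH = "/var/tmp/spdk.sock"  # RPC socket named in the trace above

    def rpc(method, params=None, req_id=1):
        """Send one JSON-RPC 2.0 request over the UNIX socket and return the reply."""
        req = {"jsonrpc": "2.0", "method": method, "id": req_id}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(SOCK_PATH)
            sock.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("target closed the RPC socket")
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except json.JSONDecodeError:
                    continue  # partial response, keep reading

    if __name__ == "__main__":
        print(rpc("spdk_get_version"))

This is only a sketch of the transport; the test harness layers retry logic, plugin paths (PYTHONPATH entries shown below) and error handling on top of the same request/response exchange.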
00:04:14.332 [2024-07-25 11:15:30.114828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.266 11:15:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.267 11:15:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:15.267 11:15:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.267 11:15:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.267 11:15:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.267 11:15:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.267 11:15:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.267 11:15:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.267 11:15:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 ************************************ 00:04:15.267 START TEST rpc_integrity 00:04:15.267 ************************************ 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:15.267 11:15:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.267 11:15:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.267 11:15:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.267 11:15:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.267 11:15:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.267 11:15:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.267 { 00:04:15.267 "name": "Malloc0", 00:04:15.267 "aliases": [ 00:04:15.267 "5be92dcb-c163-4eb6-b5d9-7d2743405bea" 00:04:15.267 ], 00:04:15.267 "product_name": "Malloc disk", 00:04:15.267 "block_size": 512, 00:04:15.267 "num_blocks": 16384, 00:04:15.267 "uuid": "5be92dcb-c163-4eb6-b5d9-7d2743405bea", 00:04:15.267 "assigned_rate_limits": { 00:04:15.267 "rw_ios_per_sec": 0, 00:04:15.267 "rw_mbytes_per_sec": 0, 00:04:15.267 "r_mbytes_per_sec": 0, 00:04:15.267 "w_mbytes_per_sec": 0 00:04:15.267 }, 00:04:15.267 "claimed": false, 00:04:15.267 "zoned": false, 00:04:15.267 "supported_io_types": { 00:04:15.267 "read": true, 00:04:15.267 "write": true, 00:04:15.267 "unmap": true, 00:04:15.267 "flush": true, 
00:04:15.267 "reset": true, 00:04:15.267 "nvme_admin": false, 00:04:15.267 "nvme_io": false, 00:04:15.267 "nvme_io_md": false, 00:04:15.267 "write_zeroes": true, 00:04:15.267 "zcopy": true, 00:04:15.267 "get_zone_info": false, 00:04:15.267 "zone_management": false, 00:04:15.267 "zone_append": false, 00:04:15.267 "compare": false, 00:04:15.267 "compare_and_write": false, 00:04:15.267 "abort": true, 00:04:15.267 "seek_hole": false, 00:04:15.267 "seek_data": false, 00:04:15.267 "copy": true, 00:04:15.267 "nvme_iov_md": false 00:04:15.267 }, 00:04:15.267 "memory_domains": [ 00:04:15.267 { 00:04:15.267 "dma_device_id": "system", 00:04:15.267 "dma_device_type": 1 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.267 "dma_device_type": 2 00:04:15.267 } 00:04:15.267 ], 00:04:15.267 "driver_specific": {} 00:04:15.267 } 00:04:15.267 ]' 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 [2024-07-25 11:15:31.083784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.267 [2024-07-25 11:15:31.083864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.267 [2024-07-25 11:15:31.083903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:15.267 [2024-07-25 11:15:31.083920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.267 [2024-07-25 11:15:31.086900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.267 [2024-07-25 11:15:31.086948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.267 Passthru0 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.267 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.267 { 00:04:15.267 "name": "Malloc0", 00:04:15.267 "aliases": [ 00:04:15.267 "5be92dcb-c163-4eb6-b5d9-7d2743405bea" 00:04:15.267 ], 00:04:15.267 "product_name": "Malloc disk", 00:04:15.267 "block_size": 512, 00:04:15.267 "num_blocks": 16384, 00:04:15.267 "uuid": "5be92dcb-c163-4eb6-b5d9-7d2743405bea", 00:04:15.267 "assigned_rate_limits": { 00:04:15.267 "rw_ios_per_sec": 0, 00:04:15.267 "rw_mbytes_per_sec": 0, 00:04:15.267 "r_mbytes_per_sec": 0, 00:04:15.267 "w_mbytes_per_sec": 0 00:04:15.267 }, 00:04:15.267 "claimed": true, 00:04:15.267 "claim_type": "exclusive_write", 00:04:15.267 "zoned": false, 00:04:15.267 "supported_io_types": { 00:04:15.267 "read": true, 00:04:15.267 "write": true, 00:04:15.267 "unmap": true, 00:04:15.267 "flush": true, 00:04:15.267 "reset": true, 00:04:15.267 "nvme_admin": false, 00:04:15.267 "nvme_io": false, 00:04:15.267 "nvme_io_md": false, 00:04:15.267 "write_zeroes": true, 00:04:15.267 "zcopy": true, 
00:04:15.267 "get_zone_info": false, 00:04:15.267 "zone_management": false, 00:04:15.267 "zone_append": false, 00:04:15.267 "compare": false, 00:04:15.267 "compare_and_write": false, 00:04:15.267 "abort": true, 00:04:15.267 "seek_hole": false, 00:04:15.267 "seek_data": false, 00:04:15.267 "copy": true, 00:04:15.267 "nvme_iov_md": false 00:04:15.267 }, 00:04:15.267 "memory_domains": [ 00:04:15.267 { 00:04:15.267 "dma_device_id": "system", 00:04:15.267 "dma_device_type": 1 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.267 "dma_device_type": 2 00:04:15.267 } 00:04:15.267 ], 00:04:15.267 "driver_specific": {} 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "name": "Passthru0", 00:04:15.267 "aliases": [ 00:04:15.267 "4be9148e-392c-5663-9366-3195a9d279a3" 00:04:15.267 ], 00:04:15.267 "product_name": "passthru", 00:04:15.267 "block_size": 512, 00:04:15.267 "num_blocks": 16384, 00:04:15.267 "uuid": "4be9148e-392c-5663-9366-3195a9d279a3", 00:04:15.267 "assigned_rate_limits": { 00:04:15.267 "rw_ios_per_sec": 0, 00:04:15.267 "rw_mbytes_per_sec": 0, 00:04:15.267 "r_mbytes_per_sec": 0, 00:04:15.267 "w_mbytes_per_sec": 0 00:04:15.267 }, 00:04:15.267 "claimed": false, 00:04:15.267 "zoned": false, 00:04:15.267 "supported_io_types": { 00:04:15.267 "read": true, 00:04:15.267 "write": true, 00:04:15.267 "unmap": true, 00:04:15.267 "flush": true, 00:04:15.267 "reset": true, 00:04:15.267 "nvme_admin": false, 00:04:15.267 "nvme_io": false, 00:04:15.267 "nvme_io_md": false, 00:04:15.267 "write_zeroes": true, 00:04:15.267 "zcopy": true, 00:04:15.267 "get_zone_info": false, 00:04:15.267 "zone_management": false, 00:04:15.267 "zone_append": false, 00:04:15.267 "compare": false, 00:04:15.267 "compare_and_write": false, 00:04:15.267 "abort": true, 00:04:15.267 "seek_hole": false, 00:04:15.267 "seek_data": false, 00:04:15.267 "copy": true, 00:04:15.267 "nvme_iov_md": false 00:04:15.267 }, 00:04:15.267 "memory_domains": [ 00:04:15.267 { 00:04:15.267 "dma_device_id": "system", 00:04:15.267 "dma_device_type": 1 00:04:15.267 }, 00:04:15.267 { 00:04:15.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.267 "dma_device_type": 2 00:04:15.267 } 00:04:15.267 ], 00:04:15.267 "driver_specific": { 00:04:15.267 "passthru": { 00:04:15.267 "name": "Passthru0", 00:04:15.267 "base_bdev_name": "Malloc0" 00:04:15.267 } 00:04:15.267 } 00:04:15.267 } 00:04:15.267 ]' 00:04:15.267 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.525 11:15:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.525 00:04:15.525 real 0m0.336s 00:04:15.525 user 0m0.201s 00:04:15.525 sys 0m0.038s 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 ************************************ 00:04:15.525 END TEST rpc_integrity 00:04:15.525 ************************************ 00:04:15.525 11:15:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.525 11:15:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.525 11:15:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.525 11:15:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 ************************************ 00:04:15.525 START TEST rpc_plugins 00:04:15.525 ************************************ 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:15.525 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.525 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.525 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.525 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.525 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.525 { 00:04:15.525 "name": "Malloc1", 00:04:15.525 "aliases": [ 00:04:15.525 "c10f3ef3-2878-4007-ac55-2f568fa9e426" 00:04:15.525 ], 00:04:15.525 "product_name": "Malloc disk", 00:04:15.525 "block_size": 4096, 00:04:15.525 "num_blocks": 256, 00:04:15.525 "uuid": "c10f3ef3-2878-4007-ac55-2f568fa9e426", 00:04:15.525 "assigned_rate_limits": { 00:04:15.525 "rw_ios_per_sec": 0, 00:04:15.525 "rw_mbytes_per_sec": 0, 00:04:15.525 "r_mbytes_per_sec": 0, 00:04:15.525 "w_mbytes_per_sec": 0 00:04:15.525 }, 00:04:15.525 "claimed": false, 00:04:15.525 "zoned": false, 00:04:15.525 "supported_io_types": { 00:04:15.525 "read": true, 00:04:15.525 "write": true, 00:04:15.526 "unmap": true, 00:04:15.526 "flush": true, 00:04:15.526 "reset": true, 00:04:15.526 "nvme_admin": false, 00:04:15.526 "nvme_io": false, 00:04:15.526 "nvme_io_md": false, 00:04:15.526 "write_zeroes": true, 00:04:15.526 "zcopy": true, 00:04:15.526 "get_zone_info": false, 00:04:15.526 "zone_management": false, 00:04:15.526 "zone_append": false, 00:04:15.526 "compare": false, 00:04:15.526 "compare_and_write": false, 00:04:15.526 "abort": true, 00:04:15.526 "seek_hole": false, 00:04:15.526 "seek_data": false, 00:04:15.526 "copy": true, 00:04:15.526 "nvme_iov_md": false 00:04:15.526 }, 00:04:15.526 "memory_domains": [ 00:04:15.526 { 00:04:15.526 "dma_device_id": "system", 00:04:15.526 "dma_device_type": 1 00:04:15.526 }, 00:04:15.526 { 00:04:15.526 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:04:15.526 "dma_device_type": 2 00:04:15.526 } 00:04:15.526 ], 00:04:15.526 "driver_specific": {} 00:04:15.526 } 00:04:15.526 ]' 00:04:15.526 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.526 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.526 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.526 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.526 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.784 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.784 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.784 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.784 11:15:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.784 00:04:15.784 real 0m0.160s 00:04:15.784 user 0m0.102s 00:04:15.784 sys 0m0.020s 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.784 ************************************ 00:04:15.784 11:15:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.784 END TEST rpc_plugins 00:04:15.784 ************************************ 00:04:15.784 11:15:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.784 11:15:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.784 11:15:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.784 11:15:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.784 ************************************ 00:04:15.784 START TEST rpc_trace_cmd_test 00:04:15.784 ************************************ 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.784 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58857", 00:04:15.784 "tpoint_group_mask": "0x8", 00:04:15.784 "iscsi_conn": { 00:04:15.784 "mask": "0x2", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "scsi": { 00:04:15.784 "mask": "0x4", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "bdev": { 00:04:15.784 "mask": "0x8", 00:04:15.784 "tpoint_mask": "0xffffffffffffffff" 00:04:15.784 }, 00:04:15.784 "nvmf_rdma": { 00:04:15.784 "mask": "0x10", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "nvmf_tcp": { 00:04:15.784 "mask": "0x20", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "ftl": { 00:04:15.784 "mask": "0x40", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "blobfs": { 00:04:15.784 "mask": "0x80", 00:04:15.784 
"tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "dsa": { 00:04:15.784 "mask": "0x200", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "thread": { 00:04:15.784 "mask": "0x400", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "nvme_pcie": { 00:04:15.784 "mask": "0x800", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "iaa": { 00:04:15.784 "mask": "0x1000", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "nvme_tcp": { 00:04:15.784 "mask": "0x2000", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "bdev_nvme": { 00:04:15.784 "mask": "0x4000", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 }, 00:04:15.784 "sock": { 00:04:15.784 "mask": "0x8000", 00:04:15.784 "tpoint_mask": "0x0" 00:04:15.784 } 00:04:15.784 }' 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.784 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.041 00:04:16.041 real 0m0.289s 00:04:16.041 user 0m0.243s 00:04:16.041 sys 0m0.036s 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.041 11:15:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.041 ************************************ 00:04:16.041 END TEST rpc_trace_cmd_test 00:04:16.041 ************************************ 00:04:16.041 11:15:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.041 11:15:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.041 11:15:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.041 11:15:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.041 11:15:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.041 11:15:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.041 ************************************ 00:04:16.042 START TEST rpc_daemon_integrity 00:04:16.042 ************************************ 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.042 11:15:31 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.042 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.299 { 00:04:16.299 "name": "Malloc2", 00:04:16.299 "aliases": [ 00:04:16.299 "959d1aaf-da7b-4a18-bbc5-2b3be8b46f22" 00:04:16.299 ], 00:04:16.299 "product_name": "Malloc disk", 00:04:16.299 "block_size": 512, 00:04:16.299 "num_blocks": 16384, 00:04:16.299 "uuid": "959d1aaf-da7b-4a18-bbc5-2b3be8b46f22", 00:04:16.299 "assigned_rate_limits": { 00:04:16.299 "rw_ios_per_sec": 0, 00:04:16.299 "rw_mbytes_per_sec": 0, 00:04:16.299 "r_mbytes_per_sec": 0, 00:04:16.299 "w_mbytes_per_sec": 0 00:04:16.299 }, 00:04:16.299 "claimed": false, 00:04:16.299 "zoned": false, 00:04:16.299 "supported_io_types": { 00:04:16.299 "read": true, 00:04:16.299 "write": true, 00:04:16.299 "unmap": true, 00:04:16.299 "flush": true, 00:04:16.299 "reset": true, 00:04:16.299 "nvme_admin": false, 00:04:16.299 "nvme_io": false, 00:04:16.299 "nvme_io_md": false, 00:04:16.299 "write_zeroes": true, 00:04:16.299 "zcopy": true, 00:04:16.299 "get_zone_info": false, 00:04:16.299 "zone_management": false, 00:04:16.299 "zone_append": false, 00:04:16.299 "compare": false, 00:04:16.299 "compare_and_write": false, 00:04:16.299 "abort": true, 00:04:16.299 "seek_hole": false, 00:04:16.299 "seek_data": false, 00:04:16.299 "copy": true, 00:04:16.299 "nvme_iov_md": false 00:04:16.299 }, 00:04:16.299 "memory_domains": [ 00:04:16.299 { 00:04:16.299 "dma_device_id": "system", 00:04:16.299 "dma_device_type": 1 00:04:16.299 }, 00:04:16.299 { 00:04:16.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.299 "dma_device_type": 2 00:04:16.299 } 00:04:16.299 ], 00:04:16.299 "driver_specific": {} 00:04:16.299 } 00:04:16.299 ]' 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.299 11:15:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.299 [2024-07-25 11:15:32.004279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.299 [2024-07-25 11:15:32.004353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.299 [2024-07-25 11:15:32.004383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:16.299 [2024-07-25 11:15:32.004398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.299 [2024-07-25 11:15:32.007270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.299 [2024-07-25 11:15:32.007326] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.299 Passthru0 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.299 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.299 { 00:04:16.299 "name": "Malloc2", 00:04:16.299 "aliases": [ 00:04:16.299 "959d1aaf-da7b-4a18-bbc5-2b3be8b46f22" 00:04:16.299 ], 00:04:16.299 "product_name": "Malloc disk", 00:04:16.299 "block_size": 512, 00:04:16.299 "num_blocks": 16384, 00:04:16.299 "uuid": "959d1aaf-da7b-4a18-bbc5-2b3be8b46f22", 00:04:16.299 "assigned_rate_limits": { 00:04:16.299 "rw_ios_per_sec": 0, 00:04:16.299 "rw_mbytes_per_sec": 0, 00:04:16.299 "r_mbytes_per_sec": 0, 00:04:16.299 "w_mbytes_per_sec": 0 00:04:16.299 }, 00:04:16.299 "claimed": true, 00:04:16.299 "claim_type": "exclusive_write", 00:04:16.299 "zoned": false, 00:04:16.299 "supported_io_types": { 00:04:16.299 "read": true, 00:04:16.299 "write": true, 00:04:16.299 "unmap": true, 00:04:16.299 "flush": true, 00:04:16.299 "reset": true, 00:04:16.299 "nvme_admin": false, 00:04:16.299 "nvme_io": false, 00:04:16.299 "nvme_io_md": false, 00:04:16.299 "write_zeroes": true, 00:04:16.299 "zcopy": true, 00:04:16.299 "get_zone_info": false, 00:04:16.299 "zone_management": false, 00:04:16.299 "zone_append": false, 00:04:16.300 "compare": false, 00:04:16.300 "compare_and_write": false, 00:04:16.300 "abort": true, 00:04:16.300 "seek_hole": false, 00:04:16.300 "seek_data": false, 00:04:16.300 "copy": true, 00:04:16.300 "nvme_iov_md": false 00:04:16.300 }, 00:04:16.300 "memory_domains": [ 00:04:16.300 { 00:04:16.300 "dma_device_id": "system", 00:04:16.300 "dma_device_type": 1 00:04:16.300 }, 00:04:16.300 { 00:04:16.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.300 "dma_device_type": 2 00:04:16.300 } 00:04:16.300 ], 00:04:16.300 "driver_specific": {} 00:04:16.300 }, 00:04:16.300 { 00:04:16.300 "name": "Passthru0", 00:04:16.300 "aliases": [ 00:04:16.300 "d7080d8e-1318-5518-880a-11def46aad2a" 00:04:16.300 ], 00:04:16.300 "product_name": "passthru", 00:04:16.300 "block_size": 512, 00:04:16.300 "num_blocks": 16384, 00:04:16.300 "uuid": "d7080d8e-1318-5518-880a-11def46aad2a", 00:04:16.300 "assigned_rate_limits": { 00:04:16.300 "rw_ios_per_sec": 0, 00:04:16.300 "rw_mbytes_per_sec": 0, 00:04:16.300 "r_mbytes_per_sec": 0, 00:04:16.300 "w_mbytes_per_sec": 0 00:04:16.300 }, 00:04:16.300 "claimed": false, 00:04:16.300 "zoned": false, 00:04:16.300 "supported_io_types": { 00:04:16.300 "read": true, 00:04:16.300 "write": true, 00:04:16.300 "unmap": true, 00:04:16.300 "flush": true, 00:04:16.300 "reset": true, 00:04:16.300 "nvme_admin": false, 00:04:16.300 "nvme_io": false, 00:04:16.300 "nvme_io_md": false, 00:04:16.300 "write_zeroes": true, 00:04:16.300 "zcopy": true, 00:04:16.300 "get_zone_info": false, 00:04:16.300 "zone_management": false, 00:04:16.300 "zone_append": false, 00:04:16.300 "compare": false, 00:04:16.300 "compare_and_write": false, 00:04:16.300 "abort": true, 00:04:16.300 "seek_hole": false, 00:04:16.300 "seek_data": false, 00:04:16.300 "copy": true, 00:04:16.300 "nvme_iov_md": false 00:04:16.300 }, 00:04:16.300 
"memory_domains": [ 00:04:16.300 { 00:04:16.300 "dma_device_id": "system", 00:04:16.300 "dma_device_type": 1 00:04:16.300 }, 00:04:16.300 { 00:04:16.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.300 "dma_device_type": 2 00:04:16.300 } 00:04:16.300 ], 00:04:16.300 "driver_specific": { 00:04:16.300 "passthru": { 00:04:16.300 "name": "Passthru0", 00:04:16.300 "base_bdev_name": "Malloc2" 00:04:16.300 } 00:04:16.300 } 00:04:16.300 } 00:04:16.300 ]' 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.300 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.559 11:15:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.559 00:04:16.559 real 0m0.345s 00:04:16.559 user 0m0.227s 00:04:16.559 sys 0m0.029s 00:04:16.559 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.559 11:15:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.559 ************************************ 00:04:16.559 END TEST rpc_daemon_integrity 00:04:16.559 ************************************ 00:04:16.559 11:15:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.559 11:15:32 rpc -- rpc/rpc.sh@84 -- # killprocess 58857 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 58857 ']' 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@954 -- # kill -0 58857 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58857 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.559 killing process with pid 58857 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58857' 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@969 -- # kill 58857 00:04:16.559 11:15:32 rpc -- common/autotest_common.sh@974 -- # wait 58857 00:04:19.128 00:04:19.128 real 0m5.033s 00:04:19.128 user 0m5.682s 
00:04:19.128 sys 0m0.859s 00:04:19.128 11:15:34 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.128 11:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.128 ************************************ 00:04:19.128 END TEST rpc 00:04:19.128 ************************************ 00:04:19.128 11:15:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.128 11:15:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.128 11:15:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.128 11:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.128 ************************************ 00:04:19.128 START TEST skip_rpc 00:04:19.128 ************************************ 00:04:19.128 11:15:34 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.128 * Looking for test storage... 00:04:19.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.128 11:15:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.128 11:15:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.128 11:15:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.128 11:15:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.128 11:15:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.128 11:15:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.128 ************************************ 00:04:19.128 START TEST skip_rpc 00:04:19.128 ************************************ 00:04:19.128 11:15:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:19.128 11:15:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59078 00:04:19.128 11:15:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.128 11:15:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.128 11:15:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.128 [2024-07-25 11:15:34.817156] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:04:19.128 [2024-07-25 11:15:34.817364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59078 ] 00:04:19.128 [2024-07-25 11:15:34.992655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.386 [2024-07-25 11:15:35.238047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59078 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 59078 ']' 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 59078 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59078 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.654 killing process with pid 59078 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59078' 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 59078 00:04:24.654 11:15:39 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 59078 00:04:26.557 00:04:26.557 real 0m7.287s 00:04:26.557 user 0m6.740s 00:04:26.557 sys 0m0.433s 00:04:26.557 11:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.557 11:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.557 ************************************ 00:04:26.557 END TEST skip_rpc 00:04:26.557 
************************************ 00:04:26.557 11:15:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:26.557 11:15:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.557 11:15:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.557 11:15:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.557 ************************************ 00:04:26.557 START TEST skip_rpc_with_json 00:04:26.557 ************************************ 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59182 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59182 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59182 ']' 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.557 11:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.557 [2024-07-25 11:15:42.148899] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:04:26.557 [2024-07-25 11:15:42.149338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:04:26.557 [2024-07-25 11:15:42.318067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.816 [2024-07-25 11:15:42.566927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 [2024-07-25 11:15:43.403150] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:27.750 request: 00:04:27.750 { 00:04:27.750 "trtype": "tcp", 00:04:27.750 "method": "nvmf_get_transports", 00:04:27.750 "req_id": 1 00:04:27.750 } 00:04:27.750 Got JSON-RPC error response 00:04:27.750 response: 00:04:27.750 { 00:04:27.750 "code": -19, 00:04:27.750 "message": "No such device" 00:04:27.750 } 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 [2024-07-25 11:15:43.415283] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.750 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.750 { 00:04:27.750 "subsystems": [ 00:04:27.750 { 00:04:27.750 "subsystem": "keyring", 00:04:27.750 "config": [] 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "subsystem": "iobuf", 00:04:27.750 "config": [ 00:04:27.750 { 00:04:27.750 "method": "iobuf_set_options", 00:04:27.750 "params": { 00:04:27.750 "small_pool_count": 8192, 00:04:27.750 "large_pool_count": 1024, 00:04:27.750 "small_bufsize": 8192, 00:04:27.750 "large_bufsize": 135168 00:04:27.750 } 00:04:27.750 } 00:04:27.750 ] 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "subsystem": "sock", 00:04:27.750 "config": [ 00:04:27.750 { 00:04:27.750 "method": "sock_set_default_impl", 00:04:27.750 "params": { 00:04:27.750 "impl_name": "posix" 00:04:27.750 } 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "method": "sock_impl_set_options", 00:04:27.750 "params": { 00:04:27.750 "impl_name": "ssl", 00:04:27.750 "recv_buf_size": 4096, 00:04:27.750 "send_buf_size": 4096, 
00:04:27.750 "enable_recv_pipe": true, 00:04:27.750 "enable_quickack": false, 00:04:27.750 "enable_placement_id": 0, 00:04:27.750 "enable_zerocopy_send_server": true, 00:04:27.750 "enable_zerocopy_send_client": false, 00:04:27.750 "zerocopy_threshold": 0, 00:04:27.750 "tls_version": 0, 00:04:27.750 "enable_ktls": false 00:04:27.750 } 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "method": "sock_impl_set_options", 00:04:27.750 "params": { 00:04:27.750 "impl_name": "posix", 00:04:27.750 "recv_buf_size": 2097152, 00:04:27.750 "send_buf_size": 2097152, 00:04:27.750 "enable_recv_pipe": true, 00:04:27.750 "enable_quickack": false, 00:04:27.750 "enable_placement_id": 0, 00:04:27.750 "enable_zerocopy_send_server": true, 00:04:27.750 "enable_zerocopy_send_client": false, 00:04:27.750 "zerocopy_threshold": 0, 00:04:27.750 "tls_version": 0, 00:04:27.750 "enable_ktls": false 00:04:27.750 } 00:04:27.750 } 00:04:27.750 ] 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "subsystem": "vmd", 00:04:27.750 "config": [] 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "subsystem": "accel", 00:04:27.750 "config": [ 00:04:27.750 { 00:04:27.750 "method": "accel_set_options", 00:04:27.750 "params": { 00:04:27.750 "small_cache_size": 128, 00:04:27.750 "large_cache_size": 16, 00:04:27.750 "task_count": 2048, 00:04:27.750 "sequence_count": 2048, 00:04:27.750 "buf_count": 2048 00:04:27.750 } 00:04:27.750 } 00:04:27.750 ] 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "subsystem": "bdev", 00:04:27.750 "config": [ 00:04:27.750 { 00:04:27.750 "method": "bdev_set_options", 00:04:27.750 "params": { 00:04:27.750 "bdev_io_pool_size": 65535, 00:04:27.750 "bdev_io_cache_size": 256, 00:04:27.750 "bdev_auto_examine": true, 00:04:27.750 "iobuf_small_cache_size": 128, 00:04:27.750 "iobuf_large_cache_size": 16 00:04:27.750 } 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "method": "bdev_raid_set_options", 00:04:27.750 "params": { 00:04:27.750 "process_window_size_kb": 1024, 00:04:27.750 "process_max_bandwidth_mb_sec": 0 00:04:27.750 } 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "method": "bdev_iscsi_set_options", 00:04:27.750 "params": { 00:04:27.750 "timeout_sec": 30 00:04:27.750 } 00:04:27.750 }, 00:04:27.750 { 00:04:27.750 "method": "bdev_nvme_set_options", 00:04:27.750 "params": { 00:04:27.750 "action_on_timeout": "none", 00:04:27.750 "timeout_us": 0, 00:04:27.750 "timeout_admin_us": 0, 00:04:27.750 "keep_alive_timeout_ms": 10000, 00:04:27.750 "arbitration_burst": 0, 00:04:27.750 "low_priority_weight": 0, 00:04:27.750 "medium_priority_weight": 0, 00:04:27.750 "high_priority_weight": 0, 00:04:27.750 "nvme_adminq_poll_period_us": 10000, 00:04:27.750 "nvme_ioq_poll_period_us": 0, 00:04:27.750 "io_queue_requests": 0, 00:04:27.750 "delay_cmd_submit": true, 00:04:27.750 "transport_retry_count": 4, 00:04:27.750 "bdev_retry_count": 3, 00:04:27.750 "transport_ack_timeout": 0, 00:04:27.750 "ctrlr_loss_timeout_sec": 0, 00:04:27.750 "reconnect_delay_sec": 0, 00:04:27.750 "fast_io_fail_timeout_sec": 0, 00:04:27.750 "disable_auto_failback": false, 00:04:27.750 "generate_uuids": false, 00:04:27.751 "transport_tos": 0, 00:04:27.751 "nvme_error_stat": false, 00:04:27.751 "rdma_srq_size": 0, 00:04:27.751 "io_path_stat": false, 00:04:27.751 "allow_accel_sequence": false, 00:04:27.751 "rdma_max_cq_size": 0, 00:04:27.751 "rdma_cm_event_timeout_ms": 0, 00:04:27.751 "dhchap_digests": [ 00:04:27.751 "sha256", 00:04:27.751 "sha384", 00:04:27.751 "sha512" 00:04:27.751 ], 00:04:27.751 "dhchap_dhgroups": [ 00:04:27.751 "null", 00:04:27.751 "ffdhe2048", 00:04:27.751 
"ffdhe3072", 00:04:27.751 "ffdhe4096", 00:04:27.751 "ffdhe6144", 00:04:27.751 "ffdhe8192" 00:04:27.751 ] 00:04:27.751 } 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "method": "bdev_nvme_set_hotplug", 00:04:27.751 "params": { 00:04:27.751 "period_us": 100000, 00:04:27.751 "enable": false 00:04:27.751 } 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "method": "bdev_wait_for_examine" 00:04:27.751 } 00:04:27.751 ] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "scsi", 00:04:27.751 "config": null 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "scheduler", 00:04:27.751 "config": [ 00:04:27.751 { 00:04:27.751 "method": "framework_set_scheduler", 00:04:27.751 "params": { 00:04:27.751 "name": "static" 00:04:27.751 } 00:04:27.751 } 00:04:27.751 ] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "vhost_scsi", 00:04:27.751 "config": [] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "vhost_blk", 00:04:27.751 "config": [] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "ublk", 00:04:27.751 "config": [] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "nbd", 00:04:27.751 "config": [] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "nvmf", 00:04:27.751 "config": [ 00:04:27.751 { 00:04:27.751 "method": "nvmf_set_config", 00:04:27.751 "params": { 00:04:27.751 "discovery_filter": "match_any", 00:04:27.751 "admin_cmd_passthru": { 00:04:27.751 "identify_ctrlr": false 00:04:27.751 } 00:04:27.751 } 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "method": "nvmf_set_max_subsystems", 00:04:27.751 "params": { 00:04:27.751 "max_subsystems": 1024 00:04:27.751 } 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "method": "nvmf_set_crdt", 00:04:27.751 "params": { 00:04:27.751 "crdt1": 0, 00:04:27.751 "crdt2": 0, 00:04:27.751 "crdt3": 0 00:04:27.751 } 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "method": "nvmf_create_transport", 00:04:27.751 "params": { 00:04:27.751 "trtype": "TCP", 00:04:27.751 "max_queue_depth": 128, 00:04:27.751 "max_io_qpairs_per_ctrlr": 127, 00:04:27.751 "in_capsule_data_size": 4096, 00:04:27.751 "max_io_size": 131072, 00:04:27.751 "io_unit_size": 131072, 00:04:27.751 "max_aq_depth": 128, 00:04:27.751 "num_shared_buffers": 511, 00:04:27.751 "buf_cache_size": 4294967295, 00:04:27.751 "dif_insert_or_strip": false, 00:04:27.751 "zcopy": false, 00:04:27.751 "c2h_success": true, 00:04:27.751 "sock_priority": 0, 00:04:27.751 "abort_timeout_sec": 1, 00:04:27.751 "ack_timeout": 0, 00:04:27.751 "data_wr_pool_size": 0 00:04:27.751 } 00:04:27.751 } 00:04:27.751 ] 00:04:27.751 }, 00:04:27.751 { 00:04:27.751 "subsystem": "iscsi", 00:04:27.751 "config": [ 00:04:27.751 { 00:04:27.751 "method": "iscsi_set_options", 00:04:27.751 "params": { 00:04:27.751 "node_base": "iqn.2016-06.io.spdk", 00:04:27.751 "max_sessions": 128, 00:04:27.751 "max_connections_per_session": 2, 00:04:27.751 "max_queue_depth": 64, 00:04:27.751 "default_time2wait": 2, 00:04:27.751 "default_time2retain": 20, 00:04:27.751 "first_burst_length": 8192, 00:04:27.751 "immediate_data": true, 00:04:27.751 "allow_duplicated_isid": false, 00:04:27.751 "error_recovery_level": 0, 00:04:27.751 "nop_timeout": 60, 00:04:27.751 "nop_in_interval": 30, 00:04:27.751 "disable_chap": false, 00:04:27.751 "require_chap": false, 00:04:27.751 "mutual_chap": false, 00:04:27.751 "chap_group": 0, 00:04:27.751 "max_large_datain_per_connection": 64, 00:04:27.751 "max_r2t_per_connection": 4, 00:04:27.751 "pdu_pool_size": 36864, 00:04:27.751 "immediate_data_pool_size": 16384, 00:04:27.751 "data_out_pool_size": 2048 
00:04:27.751 } 00:04:27.751 } 00:04:27.751 ] 00:04:27.751 } 00:04:27.751 ] 00:04:27.751 } 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59182 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59182 ']' 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59182 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59182 00:04:27.751 killing process with pid 59182 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59182' 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59182 00:04:27.751 11:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59182 00:04:30.282 11:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59238 00:04:30.282 11:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.282 11:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59238 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59238 ']' 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59238 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59238 00:04:35.545 killing process with pid 59238 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59238' 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59238 00:04:35.545 11:15:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59238 00:04:37.446 11:15:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.446 11:15:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.446 ************************************ 00:04:37.446 END TEST skip_rpc_with_json 00:04:37.446 ************************************ 00:04:37.446 00:04:37.446 real 0m11.209s 00:04:37.446 user 0m10.587s 00:04:37.446 sys 0m0.979s 00:04:37.446 11:15:53 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.446 11:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.446 11:15:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.446 11:15:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.446 11:15:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.446 11:15:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.446 ************************************ 00:04:37.446 START TEST skip_rpc_with_delay 00:04:37.446 ************************************ 00:04:37.446 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.447 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.704 [2024-07-25 11:15:53.405031] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
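
The trace above is the skip_rpc_with_delay check: spdk_tgt is launched with --no-rpc-server together with --wait-for-rpc and is expected to refuse the combination, which is exactly the app.c error printed above. A minimal stand-alone sketch of that negative check, assuming only the locally built spdk_tgt already used in this run, could look like:

  #!/usr/bin/env bash
  # Sketch only: reproduce the negative check from skip_rpc_with_delay.
  # SPDK_BIN is an assumption; point it at any locally built spdk_tgt.
  SPDK_BIN=${SPDK_BIN:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt}

  # spdk_tgt must fail: --wait-for-rpc makes no sense without an RPC server.
  if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: spdk_tgt started with --wait-for-rpc and no RPC server" >&2
      exit 1
  fi
  echo "spdk_tgt rejected the flag combination as expected"

The real test wraps the invocation in its NOT helper, so the non-zero exit is recorded as es=1 in the lines that follow.
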
00:04:37.704 [2024-07-25 11:15:53.405262] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:37.704 ************************************ 00:04:37.704 END TEST skip_rpc_with_delay 00:04:37.704 ************************************ 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.704 00:04:37.704 real 0m0.185s 00:04:37.704 user 0m0.104s 00:04:37.704 sys 0m0.078s 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.704 11:15:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 11:15:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.704 11:15:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.704 11:15:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.704 11:15:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.704 11:15:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.704 11:15:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 ************************************ 00:04:37.704 START TEST exit_on_failed_rpc_init 00:04:37.704 ************************************ 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59366 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59366 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59366 ']' 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.704 11:15:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.962 [2024-07-25 11:15:53.662223] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:04:37.962 [2024-07-25 11:15:53.662458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59366 ] 00:04:38.220 [2024-07-25 11:15:53.845052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.479 [2024-07-25 11:15:54.119371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.413 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.414 11:15:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.414 [2024-07-25 11:15:55.092792] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:04:39.414 [2024-07-25 11:15:55.092995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59390 ] 00:04:39.414 [2024-07-25 11:15:55.273190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.981 [2024-07-25 11:15:55.566118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.981 [2024-07-25 11:15:55.566271] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
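
This is the exit_on_failed_rpc_init scenario: the first spdk_tgt (core mask 0x1) owns /var/tmp/spdk.sock, so the second instance started with -m 0x2 hits the "socket in use" error above, fails RPC init, and must exit non-zero. A hedged sketch of the same scenario, with the binary path as an assumption for a local build:

  #!/usr/bin/env bash
  # Sketch of exit_on_failed_rpc_init: a second spdk_tgt pointed at the same
  # default RPC socket must fail to initialize and exit non-zero while the
  # first instance keeps running.
  SPDK_BIN=${SPDK_BIN:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt}

  "$SPDK_BIN" -m 0x1 &           # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 2                        # crude settle; the real test waits on the socket

  if "$SPDK_BIN" -m 0x2; then    # second instance should never come up
      echo "unexpected: second target initialized RPC on a busy socket" >&2
      kill "$first"; exit 1
  fi
  echo "second spdk_tgt exited non-zero as expected"
  kill "$first"

In the trace, the clash makes spdk_rpc_initialize fail and spdk_app_stop exit non-zero, which the test then folds down to es=1 in the lines that follow.
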
00:04:39.981 [2024-07-25 11:15:55.566303] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.981 [2024-07-25 11:15:55.566335] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59366 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59366 ']' 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59366 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59366 00:04:40.240 killing process with pid 59366 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59366' 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59366 00:04:40.240 11:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59366 00:04:42.774 00:04:42.774 real 0m4.769s 00:04:42.774 user 0m5.496s 00:04:42.774 sys 0m0.756s 00:04:42.774 ************************************ 00:04:42.774 END TEST exit_on_failed_rpc_init 00:04:42.774 ************************************ 00:04:42.774 11:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.774 11:15:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.774 11:15:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.774 ************************************ 00:04:42.774 END TEST skip_rpc 00:04:42.774 ************************************ 00:04:42.774 00:04:42.774 real 0m23.747s 00:04:42.774 user 0m23.018s 00:04:42.774 sys 0m2.440s 00:04:42.774 11:15:58 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.774 11:15:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.774 11:15:58 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.774 11:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.774 11:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.774 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.774 
************************************ 00:04:42.774 START TEST rpc_client 00:04:42.774 ************************************ 00:04:42.774 11:15:58 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.774 * Looking for test storage... 00:04:42.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.774 11:15:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.774 OK 00:04:42.774 11:15:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.774 00:04:42.774 real 0m0.149s 00:04:42.774 user 0m0.058s 00:04:42.774 sys 0m0.095s 00:04:42.774 11:15:58 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.774 11:15:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.774 ************************************ 00:04:42.774 END TEST rpc_client 00:04:42.774 ************************************ 00:04:42.774 11:15:58 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.774 11:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.774 11:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.774 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.774 ************************************ 00:04:42.774 START TEST json_config 00:04:42.774 ************************************ 00:04:42.774 11:15:58 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:42.774 11:15:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.774 11:15:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.038 11:15:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.038 11:15:58 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.038 11:15:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.038 11:15:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.038 11:15:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.038 11:15:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.038 11:15:58 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.038 11:15:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@47 -- # : 0 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.038 11:15:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.038 WARNING: No tests are enabled so not running JSON configuration tests 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.038 11:15:58 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:43.039 11:15:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:43.039 00:04:43.039 real 0m0.085s 00:04:43.039 user 0m0.039s 00:04:43.039 sys 0m0.043s 00:04:43.039 ************************************ 00:04:43.039 END TEST json_config 00:04:43.039 ************************************ 00:04:43.039 11:15:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.039 11:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.039 11:15:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.039 11:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.039 11:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.039 11:15:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.039 ************************************ 00:04:43.039 START TEST json_config_extra_key 00:04:43.039 ************************************ 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=05a31262-bec0-4fe6-8d87-4b5d66f447c5 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.039 11:15:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.039 11:15:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.039 11:15:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.039 
11:15:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.039 11:15:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.039 11:15:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.039 11:15:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:43.039 11:15:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:43.039 11:15:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:43.039 INFO: launching applications... 
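
The launch and teardown that the following lines trace follow a simple pattern: start spdk_tgt on a private RPC socket with the extra-key JSON config, later send SIGINT and poll until the pid disappears. A sketch under the same paths used in this run (adjust for a local checkout):

  #!/usr/bin/env bash
  # Sketch of the json_config_extra_key launch/teardown traced below.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

  "$SPDK_BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CONFIG" &
  app_pid=$!

  # ... exercise the target here ...

  kill -SIGINT "$app_pid"
  for _ in $(seq 1 30); do                  # same 30 x 0.5 s budget as the test
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done
  echo "SPDK target shutdown done"

The kill -0 / sleep 0.5 loop below is exactly this polling step; the test gives up after 30 iterations if the target never exits.
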
00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:43.039 11:15:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59571 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.039 Waiting for target to run... 00:04:43.039 11:15:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59571 /var/tmp/spdk_tgt.sock 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59571 ']' 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.039 11:15:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.298 [2024-07-25 11:15:58.926586] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:04:43.298 [2024-07-25 11:15:58.927184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59571 ] 00:04:43.572 [2024-07-25 11:15:59.398999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.831 [2024-07-25 11:15:59.652663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.767 11:16:00 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.767 00:04:44.767 INFO: shutting down applications... 00:04:44.767 11:16:00 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:44.767 11:16:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:44.767 11:16:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59571 ]] 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59571 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:44.767 11:16:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.026 11:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.026 11:16:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.026 11:16:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:45.026 11:16:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.594 11:16:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.594 11:16:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.594 11:16:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:45.594 11:16:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.160 11:16:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.160 11:16:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.160 11:16:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:46.160 11:16:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.727 11:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.727 11:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.728 11:16:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:46.728 11:16:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.986 11:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.986 11:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.986 11:16:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 
00:04:46.986 11:16:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59571 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.555 SPDK target shutdown done 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.555 11:16:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.555 Success 00:04:47.555 11:16:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.555 ************************************ 00:04:47.555 END TEST json_config_extra_key 00:04:47.555 ************************************ 00:04:47.555 00:04:47.555 real 0m4.601s 00:04:47.555 user 0m4.005s 00:04:47.555 sys 0m0.629s 00:04:47.555 11:16:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.555 11:16:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.555 11:16:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.555 11:16:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.555 11:16:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.555 11:16:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.555 ************************************ 00:04:47.555 START TEST alias_rpc 00:04:47.555 ************************************ 00:04:47.555 11:16:03 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.875 * Looking for test storage... 00:04:47.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:47.875 11:16:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.875 11:16:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59674 00:04:47.875 11:16:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59674 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59674 ']' 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.875 11:16:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.875 11:16:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.875 [2024-07-25 11:16:03.577896] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
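
The alias_rpc test that starts here boils down to one call traced below: feed a JSON configuration into the running target through rpc.py load_config with the -i flag used by the test. A sketch, where RPC_PY and the inline payload are illustrative assumptions rather than the test's actual config file:

  #!/usr/bin/env bash
  # Sketch of the load_config step traced below; the JSON payload is a
  # hypothetical minimal config, not the one the test ships.
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_malloc_create","params":{"num_blocks":256,"block_size":512}}]}]}' \
      | "$RPC_PY" load_config -i
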
00:04:47.875 [2024-07-25 11:16:03.578093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:04:48.133 [2024-07-25 11:16:03.754390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.134 [2024-07-25 11:16:03.987316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.071 11:16:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.071 11:16:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.071 11:16:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.329 11:16:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59674 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59674 ']' 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59674 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59674 00:04:49.329 killing process with pid 59674 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59674' 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 59674 00:04:49.329 11:16:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 59674 00:04:51.863 ************************************ 00:04:51.863 END TEST alias_rpc 00:04:51.863 ************************************ 00:04:51.863 00:04:51.863 real 0m3.927s 00:04:51.863 user 0m4.004s 00:04:51.864 sys 0m0.591s 00:04:51.864 11:16:07 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.864 11:16:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.864 11:16:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:51.864 11:16:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.864 11:16:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.864 11:16:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.864 11:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.864 ************************************ 00:04:51.864 START TEST spdkcli_tcp 00:04:51.864 ************************************ 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.864 * Looking for test storage... 
00:04:51.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59773 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.864 11:16:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59773 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59773 ']' 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.864 11:16:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.864 [2024-07-25 11:16:07.601001] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
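
The spdkcli_tcp run traced below bridges the target's Unix-domain RPC socket to TCP with socat, then drives it with rpc.py against 127.0.0.1:9998. A sketch of that bridge, using only the commands and values that appear in the trace:

  #!/usr/bin/env bash
  # Sketch of the TCP bridge used below: socat forwards a TCP port to the
  # target's Unix-domain RPC socket; rpc.py then talks to 127.0.0.1:9998
  # with connection retries (-r) and a per-request timeout (-t).
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"

The rpc_get_methods reply that follows is the full list of RPC methods the target currently exposes, which is what the spdkcli test walks through.
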
00:04:51.864 [2024-07-25 11:16:07.601229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59773 ] 00:04:52.123 [2024-07-25 11:16:07.776527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.382 [2024-07-25 11:16:08.057382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.382 [2024-07-25 11:16:08.057392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.330 11:16:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.330 11:16:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:53.330 11:16:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59795 00:04:53.330 11:16:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:53.330 11:16:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:53.330 [ 00:04:53.330 "bdev_malloc_delete", 00:04:53.330 "bdev_malloc_create", 00:04:53.330 "bdev_null_resize", 00:04:53.330 "bdev_null_delete", 00:04:53.330 "bdev_null_create", 00:04:53.330 "bdev_nvme_cuse_unregister", 00:04:53.330 "bdev_nvme_cuse_register", 00:04:53.330 "bdev_opal_new_user", 00:04:53.330 "bdev_opal_set_lock_state", 00:04:53.330 "bdev_opal_delete", 00:04:53.330 "bdev_opal_get_info", 00:04:53.330 "bdev_opal_create", 00:04:53.330 "bdev_nvme_opal_revert", 00:04:53.330 "bdev_nvme_opal_init", 00:04:53.330 "bdev_nvme_send_cmd", 00:04:53.330 "bdev_nvme_get_path_iostat", 00:04:53.330 "bdev_nvme_get_mdns_discovery_info", 00:04:53.330 "bdev_nvme_stop_mdns_discovery", 00:04:53.330 "bdev_nvme_start_mdns_discovery", 00:04:53.330 "bdev_nvme_set_multipath_policy", 00:04:53.330 "bdev_nvme_set_preferred_path", 00:04:53.330 "bdev_nvme_get_io_paths", 00:04:53.330 "bdev_nvme_remove_error_injection", 00:04:53.330 "bdev_nvme_add_error_injection", 00:04:53.330 "bdev_nvme_get_discovery_info", 00:04:53.330 "bdev_nvme_stop_discovery", 00:04:53.330 "bdev_nvme_start_discovery", 00:04:53.330 "bdev_nvme_get_controller_health_info", 00:04:53.330 "bdev_nvme_disable_controller", 00:04:53.330 "bdev_nvme_enable_controller", 00:04:53.330 "bdev_nvme_reset_controller", 00:04:53.330 "bdev_nvme_get_transport_statistics", 00:04:53.330 "bdev_nvme_apply_firmware", 00:04:53.330 "bdev_nvme_detach_controller", 00:04:53.330 "bdev_nvme_get_controllers", 00:04:53.330 "bdev_nvme_attach_controller", 00:04:53.330 "bdev_nvme_set_hotplug", 00:04:53.330 "bdev_nvme_set_options", 00:04:53.330 "bdev_passthru_delete", 00:04:53.330 "bdev_passthru_create", 00:04:53.330 "bdev_lvol_set_parent_bdev", 00:04:53.330 "bdev_lvol_set_parent", 00:04:53.330 "bdev_lvol_check_shallow_copy", 00:04:53.330 "bdev_lvol_start_shallow_copy", 00:04:53.330 "bdev_lvol_grow_lvstore", 00:04:53.330 "bdev_lvol_get_lvols", 00:04:53.330 "bdev_lvol_get_lvstores", 00:04:53.330 "bdev_lvol_delete", 00:04:53.330 "bdev_lvol_set_read_only", 00:04:53.330 "bdev_lvol_resize", 00:04:53.330 "bdev_lvol_decouple_parent", 00:04:53.330 "bdev_lvol_inflate", 00:04:53.330 "bdev_lvol_rename", 00:04:53.330 "bdev_lvol_clone_bdev", 00:04:53.330 "bdev_lvol_clone", 00:04:53.330 "bdev_lvol_snapshot", 00:04:53.330 "bdev_lvol_create", 00:04:53.330 "bdev_lvol_delete_lvstore", 00:04:53.330 "bdev_lvol_rename_lvstore", 00:04:53.330 "bdev_lvol_create_lvstore", 
00:04:53.330 "bdev_raid_set_options", 00:04:53.330 "bdev_raid_remove_base_bdev", 00:04:53.330 "bdev_raid_add_base_bdev", 00:04:53.330 "bdev_raid_delete", 00:04:53.330 "bdev_raid_create", 00:04:53.330 "bdev_raid_get_bdevs", 00:04:53.330 "bdev_error_inject_error", 00:04:53.330 "bdev_error_delete", 00:04:53.330 "bdev_error_create", 00:04:53.330 "bdev_split_delete", 00:04:53.330 "bdev_split_create", 00:04:53.330 "bdev_delay_delete", 00:04:53.330 "bdev_delay_create", 00:04:53.330 "bdev_delay_update_latency", 00:04:53.331 "bdev_zone_block_delete", 00:04:53.331 "bdev_zone_block_create", 00:04:53.331 "blobfs_create", 00:04:53.331 "blobfs_detect", 00:04:53.331 "blobfs_set_cache_size", 00:04:53.331 "bdev_aio_delete", 00:04:53.331 "bdev_aio_rescan", 00:04:53.331 "bdev_aio_create", 00:04:53.331 "bdev_ftl_set_property", 00:04:53.331 "bdev_ftl_get_properties", 00:04:53.331 "bdev_ftl_get_stats", 00:04:53.331 "bdev_ftl_unmap", 00:04:53.331 "bdev_ftl_unload", 00:04:53.331 "bdev_ftl_delete", 00:04:53.331 "bdev_ftl_load", 00:04:53.331 "bdev_ftl_create", 00:04:53.331 "bdev_virtio_attach_controller", 00:04:53.331 "bdev_virtio_scsi_get_devices", 00:04:53.331 "bdev_virtio_detach_controller", 00:04:53.331 "bdev_virtio_blk_set_hotplug", 00:04:53.331 "bdev_iscsi_delete", 00:04:53.331 "bdev_iscsi_create", 00:04:53.331 "bdev_iscsi_set_options", 00:04:53.331 "accel_error_inject_error", 00:04:53.331 "ioat_scan_accel_module", 00:04:53.331 "dsa_scan_accel_module", 00:04:53.331 "iaa_scan_accel_module", 00:04:53.331 "keyring_file_remove_key", 00:04:53.331 "keyring_file_add_key", 00:04:53.331 "keyring_linux_set_options", 00:04:53.331 "iscsi_get_histogram", 00:04:53.331 "iscsi_enable_histogram", 00:04:53.331 "iscsi_set_options", 00:04:53.331 "iscsi_get_auth_groups", 00:04:53.331 "iscsi_auth_group_remove_secret", 00:04:53.331 "iscsi_auth_group_add_secret", 00:04:53.331 "iscsi_delete_auth_group", 00:04:53.331 "iscsi_create_auth_group", 00:04:53.331 "iscsi_set_discovery_auth", 00:04:53.331 "iscsi_get_options", 00:04:53.331 "iscsi_target_node_request_logout", 00:04:53.331 "iscsi_target_node_set_redirect", 00:04:53.331 "iscsi_target_node_set_auth", 00:04:53.331 "iscsi_target_node_add_lun", 00:04:53.331 "iscsi_get_stats", 00:04:53.331 "iscsi_get_connections", 00:04:53.331 "iscsi_portal_group_set_auth", 00:04:53.331 "iscsi_start_portal_group", 00:04:53.331 "iscsi_delete_portal_group", 00:04:53.331 "iscsi_create_portal_group", 00:04:53.331 "iscsi_get_portal_groups", 00:04:53.331 "iscsi_delete_target_node", 00:04:53.331 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.331 "iscsi_target_node_add_pg_ig_maps", 00:04:53.331 "iscsi_create_target_node", 00:04:53.331 "iscsi_get_target_nodes", 00:04:53.331 "iscsi_delete_initiator_group", 00:04:53.331 "iscsi_initiator_group_remove_initiators", 00:04:53.331 "iscsi_initiator_group_add_initiators", 00:04:53.331 "iscsi_create_initiator_group", 00:04:53.331 "iscsi_get_initiator_groups", 00:04:53.331 "nvmf_set_crdt", 00:04:53.331 "nvmf_set_config", 00:04:53.331 "nvmf_set_max_subsystems", 00:04:53.331 "nvmf_stop_mdns_prr", 00:04:53.331 "nvmf_publish_mdns_prr", 00:04:53.331 "nvmf_subsystem_get_listeners", 00:04:53.331 "nvmf_subsystem_get_qpairs", 00:04:53.331 "nvmf_subsystem_get_controllers", 00:04:53.331 "nvmf_get_stats", 00:04:53.331 "nvmf_get_transports", 00:04:53.331 "nvmf_create_transport", 00:04:53.331 "nvmf_get_targets", 00:04:53.331 "nvmf_delete_target", 00:04:53.331 "nvmf_create_target", 00:04:53.331 "nvmf_subsystem_allow_any_host", 00:04:53.331 "nvmf_subsystem_remove_host", 00:04:53.331 
"nvmf_subsystem_add_host", 00:04:53.331 "nvmf_ns_remove_host", 00:04:53.331 "nvmf_ns_add_host", 00:04:53.331 "nvmf_subsystem_remove_ns", 00:04:53.331 "nvmf_subsystem_add_ns", 00:04:53.331 "nvmf_subsystem_listener_set_ana_state", 00:04:53.331 "nvmf_discovery_get_referrals", 00:04:53.331 "nvmf_discovery_remove_referral", 00:04:53.331 "nvmf_discovery_add_referral", 00:04:53.331 "nvmf_subsystem_remove_listener", 00:04:53.331 "nvmf_subsystem_add_listener", 00:04:53.331 "nvmf_delete_subsystem", 00:04:53.331 "nvmf_create_subsystem", 00:04:53.331 "nvmf_get_subsystems", 00:04:53.331 "env_dpdk_get_mem_stats", 00:04:53.331 "nbd_get_disks", 00:04:53.331 "nbd_stop_disk", 00:04:53.331 "nbd_start_disk", 00:04:53.331 "ublk_recover_disk", 00:04:53.331 "ublk_get_disks", 00:04:53.331 "ublk_stop_disk", 00:04:53.331 "ublk_start_disk", 00:04:53.331 "ublk_destroy_target", 00:04:53.331 "ublk_create_target", 00:04:53.331 "virtio_blk_create_transport", 00:04:53.331 "virtio_blk_get_transports", 00:04:53.331 "vhost_controller_set_coalescing", 00:04:53.331 "vhost_get_controllers", 00:04:53.331 "vhost_delete_controller", 00:04:53.331 "vhost_create_blk_controller", 00:04:53.331 "vhost_scsi_controller_remove_target", 00:04:53.331 "vhost_scsi_controller_add_target", 00:04:53.331 "vhost_start_scsi_controller", 00:04:53.331 "vhost_create_scsi_controller", 00:04:53.331 "thread_set_cpumask", 00:04:53.331 "framework_get_governor", 00:04:53.331 "framework_get_scheduler", 00:04:53.331 "framework_set_scheduler", 00:04:53.331 "framework_get_reactors", 00:04:53.331 "thread_get_io_channels", 00:04:53.331 "thread_get_pollers", 00:04:53.331 "thread_get_stats", 00:04:53.331 "framework_monitor_context_switch", 00:04:53.331 "spdk_kill_instance", 00:04:53.331 "log_enable_timestamps", 00:04:53.331 "log_get_flags", 00:04:53.331 "log_clear_flag", 00:04:53.331 "log_set_flag", 00:04:53.331 "log_get_level", 00:04:53.331 "log_set_level", 00:04:53.331 "log_get_print_level", 00:04:53.331 "log_set_print_level", 00:04:53.331 "framework_enable_cpumask_locks", 00:04:53.331 "framework_disable_cpumask_locks", 00:04:53.331 "framework_wait_init", 00:04:53.331 "framework_start_init", 00:04:53.331 "scsi_get_devices", 00:04:53.331 "bdev_get_histogram", 00:04:53.331 "bdev_enable_histogram", 00:04:53.331 "bdev_set_qos_limit", 00:04:53.331 "bdev_set_qd_sampling_period", 00:04:53.331 "bdev_get_bdevs", 00:04:53.331 "bdev_reset_iostat", 00:04:53.331 "bdev_get_iostat", 00:04:53.331 "bdev_examine", 00:04:53.331 "bdev_wait_for_examine", 00:04:53.331 "bdev_set_options", 00:04:53.331 "notify_get_notifications", 00:04:53.331 "notify_get_types", 00:04:53.331 "accel_get_stats", 00:04:53.331 "accel_set_options", 00:04:53.331 "accel_set_driver", 00:04:53.331 "accel_crypto_key_destroy", 00:04:53.331 "accel_crypto_keys_get", 00:04:53.331 "accel_crypto_key_create", 00:04:53.331 "accel_assign_opc", 00:04:53.331 "accel_get_module_info", 00:04:53.331 "accel_get_opc_assignments", 00:04:53.331 "vmd_rescan", 00:04:53.331 "vmd_remove_device", 00:04:53.331 "vmd_enable", 00:04:53.331 "sock_get_default_impl", 00:04:53.331 "sock_set_default_impl", 00:04:53.331 "sock_impl_set_options", 00:04:53.331 "sock_impl_get_options", 00:04:53.331 "iobuf_get_stats", 00:04:53.331 "iobuf_set_options", 00:04:53.331 "framework_get_pci_devices", 00:04:53.331 "framework_get_config", 00:04:53.331 "framework_get_subsystems", 00:04:53.331 "trace_get_info", 00:04:53.331 "trace_get_tpoint_group_mask", 00:04:53.331 "trace_disable_tpoint_group", 00:04:53.331 "trace_enable_tpoint_group", 00:04:53.331 
"trace_clear_tpoint_mask", 00:04:53.331 "trace_set_tpoint_mask", 00:04:53.331 "keyring_get_keys", 00:04:53.331 "spdk_get_version", 00:04:53.331 "rpc_get_methods" 00:04:53.331 ] 00:04:53.331 11:16:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:53.331 11:16:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.331 11:16:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.590 11:16:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:53.590 11:16:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59773 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59773 ']' 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59773 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59773 00:04:53.590 killing process with pid 59773 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59773' 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59773 00:04:53.590 11:16:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59773 00:04:56.124 ************************************ 00:04:56.124 END TEST spdkcli_tcp 00:04:56.124 ************************************ 00:04:56.124 00:04:56.124 real 0m4.161s 00:04:56.124 user 0m7.194s 00:04:56.124 sys 0m0.715s 00:04:56.124 11:16:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.124 11:16:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.124 11:16:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.124 11:16:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.124 11:16:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.124 11:16:11 -- common/autotest_common.sh@10 -- # set +x 00:04:56.124 ************************************ 00:04:56.124 START TEST dpdk_mem_utility 00:04:56.124 ************************************ 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.124 * Looking for test storage... 00:04:56.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.124 11:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.124 11:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59887 00:04:56.124 11:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59887 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59887 ']' 00:04:56.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.124 11:16:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.124 11:16:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.124 [2024-07-25 11:16:11.763205] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:04:56.124 [2024-07-25 11:16:11.763392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59887 ] 00:04:56.124 [2024-07-25 11:16:11.937958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.383 [2024-07-25 11:16:12.171913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.321 11:16:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.321 11:16:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:57.321 11:16:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.321 11:16:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.321 11:16:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.321 11:16:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.321 { 00:04:57.321 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.321 } 00:04:57.321 11:16:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.321 11:16:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.321 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:57.321 1 heaps totaling size 820.000000 MiB 00:04:57.321 size: 820.000000 MiB heap id: 0 00:04:57.321 end heaps---------- 00:04:57.321 8 mempools totaling size 598.116089 MiB 00:04:57.321 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.321 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.321 size: 84.521057 MiB name: bdev_io_59887 00:04:57.321 size: 51.011292 MiB name: evtpool_59887 00:04:57.321 size: 50.003479 MiB name: msgpool_59887 00:04:57.321 size: 21.763794 MiB name: PDU_Pool 00:04:57.321 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.321 size: 0.026123 MiB name: Session_Pool 00:04:57.321 end mempools------- 00:04:57.321 6 memzones totaling size 4.142822 MiB 00:04:57.321 size: 1.000366 MiB name: RG_ring_0_59887 00:04:57.321 size: 1.000366 MiB name: RG_ring_1_59887 00:04:57.321 size: 1.000366 MiB name: RG_ring_4_59887 00:04:57.321 size: 1.000366 MiB name: RG_ring_5_59887 00:04:57.321 size: 0.125366 MiB name: RG_ring_2_59887 00:04:57.321 size: 0.015991 MiB name: RG_ring_3_59887 00:04:57.321 end memzones------- 00:04:57.321 11:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.321 heap id: 
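
The dpdk_mem_utility steps traced below ask the running target to dump DPDK memory statistics and then post-process the dump with dpdk_mem_info.py. A sketch of the same sequence, assuming the target is already up on /var/tmp/spdk.sock; the test's rpc_cmd helper ultimately goes through scripts/rpc.py, so calling it directly gives the same JSON reply:

  #!/usr/bin/env bash
  # Sketch of the memory-stats step traced below.
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  "$RPC_PY" env_dpdk_get_mem_stats         # replies {"filename": "/tmp/spdk_mem_dump.txt"}

  "$MEM_SCRIPT"                            # heap / mempool / memzone summary
  "$MEM_SCRIPT" -m 0                       # per-element detail for heap id 0, as dumped below
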
0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:04:57.321 list of free elements. size: 18.452271 MiB 00:04:57.321 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:57.321 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:57.321 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:57.321 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:57.321 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:57.321 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:57.321 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:57.321 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:57.321 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:57.321 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:57.321 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:57.321 element at address: 0x200000200000 with size: 0.830200 MiB 00:04:57.321 element at address: 0x20001b000000 with size: 0.564880 MiB 00:04:57.321 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:57.321 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:57.321 element at address: 0x200013800000 with size: 0.467651 MiB 00:04:57.321 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:57.321 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:57.321 list of standard malloc elements. size: 199.283325 MiB 00:04:57.321 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:57.321 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:57.321 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:57.321 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:57.321 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:57.321 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:57.321 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:57.321 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:57.321 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:57.321 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:57.321 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:57.321 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:57.321 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:57.322 element at 
address: 0x2000002d5980 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5b0c0 
with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003aff980 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d2c0 with size: 0.000244 MiB 
00:04:57.322 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:57.322 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:57.323 element at 
address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:57.323 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:57.323 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b780 
with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e680 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846e880 with size: 0.000244 MiB 
00:04:57.323 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:57.323 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:57.323 list of memzone associated elements. size: 602.264404 MiB 00:04:57.323 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:57.323 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.323 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:57.323 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.323 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:57.323 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59887_0 00:04:57.323 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:57.323 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59887_0 00:04:57.323 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:57.323 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59887_0 00:04:57.323 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:57.323 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.323 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:57.323 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.323 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:57.323 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59887 00:04:57.323 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:57.323 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59887 00:04:57.323 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:57.323 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59887 00:04:57.323 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:57.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.323 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:57.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.323 element 
at address: 0x200018efde00 with size: 1.008179 MiB 00:04:57.323 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.323 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:57.324 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.324 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:57.324 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59887 00:04:57.324 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:57.324 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59887 00:04:57.324 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:57.324 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59887 00:04:57.324 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:57.324 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59887 00:04:57.324 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:57.324 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59887 00:04:57.324 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:57.324 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.324 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:57.324 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.324 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:57.324 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.324 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:57.324 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59887 00:04:57.324 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:57.324 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.324 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:57.324 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.324 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:57.324 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59887 00:04:57.324 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:57.324 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.324 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:57.324 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59887 00:04:57.324 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:57.324 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59887 00:04:57.324 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:57.324 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.324 11:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.324 11:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59887 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59887 ']' 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59887 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59887 00:04:57.324 killing process with pid 59887 00:04:57.324 11:16:13 dpdk_mem_utility -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59887' 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59887 00:04:57.324 11:16:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59887 00:04:59.857 00:04:59.857 real 0m3.861s 00:04:59.857 user 0m3.857s 00:04:59.857 sys 0m0.549s 00:04:59.857 11:16:15 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.857 11:16:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.857 ************************************ 00:04:59.857 END TEST dpdk_mem_utility 00:04:59.857 ************************************ 00:04:59.857 11:16:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.857 11:16:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.857 11:16:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.857 11:16:15 -- common/autotest_common.sh@10 -- # set +x 00:04:59.857 ************************************ 00:04:59.857 START TEST event 00:04:59.857 ************************************ 00:04:59.857 11:16:15 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:59.857 * Looking for test storage... 00:04:59.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.857 11:16:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:59.857 11:16:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.857 11:16:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.857 11:16:15 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:59.857 11:16:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.857 11:16:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.857 ************************************ 00:04:59.857 START TEST event_perf 00:04:59.857 ************************************ 00:04:59.857 11:16:15 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.857 Running I/O for 1 seconds...[2024-07-25 11:16:15.612497] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:04:59.857 [2024-07-25 11:16:15.613089] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59987 ] 00:05:00.116 [2024-07-25 11:16:15.795317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.375 [2024-07-25 11:16:16.088314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.375 [2024-07-25 11:16:16.088473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.375 [2024-07-25 11:16:16.088576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.375 Running I/O for 1 seconds...[2024-07-25 11:16:16.088596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.751 00:05:01.751 lcore 0: 177314 00:05:01.751 lcore 1: 177315 00:05:01.751 lcore 2: 177312 00:05:01.751 lcore 3: 177312 00:05:01.751 done. 00:05:01.751 00:05:01.751 real 0m1.945s 00:05:01.751 user 0m4.645s 00:05:01.751 sys 0m0.162s 00:05:01.751 11:16:17 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.751 11:16:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.751 ************************************ 00:05:01.751 END TEST event_perf 00:05:01.751 ************************************ 00:05:01.751 11:16:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.751 11:16:17 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:01.751 11:16:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.751 11:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.751 ************************************ 00:05:01.751 START TEST event_reactor 00:05:01.751 ************************************ 00:05:01.751 11:16:17 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:01.751 [2024-07-25 11:16:17.595910] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:01.752 [2024-07-25 11:16:17.596228] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:05:02.010 [2024-07-25 11:16:17.759958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.269 [2024-07-25 11:16:17.997537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.667 test_start 00:05:03.667 oneshot 00:05:03.667 tick 100 00:05:03.667 tick 100 00:05:03.667 tick 250 00:05:03.667 tick 100 00:05:03.667 tick 100 00:05:03.667 tick 100 00:05:03.667 tick 250 00:05:03.667 tick 500 00:05:03.667 tick 100 00:05:03.667 tick 100 00:05:03.667 tick 250 00:05:03.667 tick 100 00:05:03.667 tick 100 00:05:03.667 test_end 00:05:03.667 00:05:03.667 real 0m1.844s 00:05:03.667 user 0m1.619s 00:05:03.667 sys 0m0.115s 00:05:03.667 11:16:19 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.667 11:16:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:03.667 ************************************ 00:05:03.667 END TEST event_reactor 00:05:03.667 ************************************ 00:05:03.667 11:16:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.667 11:16:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:03.667 11:16:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.667 11:16:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.667 ************************************ 00:05:03.667 START TEST event_reactor_perf 00:05:03.667 ************************************ 00:05:03.667 11:16:19 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:03.667 [2024-07-25 11:16:19.493489] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:03.667 [2024-07-25 11:16:19.494106] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:05:03.925 [2024-07-25 11:16:19.669256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.185 [2024-07-25 11:16:19.937691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.560 test_start 00:05:05.560 test_end 00:05:05.560 Performance: 278146 events per second 00:05:05.560 ************************************ 00:05:05.560 END TEST event_reactor_perf 00:05:05.560 ************************************ 00:05:05.560 00:05:05.560 real 0m1.903s 00:05:05.560 user 0m1.684s 00:05:05.560 sys 0m0.105s 00:05:05.560 11:16:21 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.560 11:16:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.560 11:16:21 event -- event/event.sh@49 -- # uname -s 00:05:05.560 11:16:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:05.560 11:16:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.560 11:16:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.560 11:16:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.560 11:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.560 ************************************ 00:05:05.560 START TEST event_scheduler 00:05:05.560 ************************************ 00:05:05.560 11:16:21 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:05.818 * Looking for test storage... 00:05:05.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:05.818 11:16:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:05.819 11:16:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60142 00:05:05.819 11:16:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.819 11:16:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60142 00:05:05.819 11:16:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60142 ']' 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.819 11:16:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.819 [2024-07-25 11:16:21.585564] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:05.819 [2024-07-25 11:16:21.585768] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60142 ] 00:05:06.078 [2024-07-25 11:16:21.763459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.336 [2024-07-25 11:16:22.040005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.336 [2024-07-25 11:16:22.040172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.336 [2024-07-25 11:16:22.040297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.336 [2024-07-25 11:16:22.040648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.903 11:16:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.903 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.903 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.903 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.903 POWER: Cannot set governor of lcore 0 to performance 00:05:06.903 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.903 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.903 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.903 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.903 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.903 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.903 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.903 [2024-07-25 11:16:22.522719] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.903 [2024-07-25 11:16:22.522753] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.903 [2024-07-25 11:16:22.522789] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.903 [2024-07-25 11:16:22.522856] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.903 [2024-07-25 11:16:22.522894] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.903 [2024-07-25 11:16:22.522916] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.903 11:16:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.903 11:16:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.162 [2024-07-25 11:16:22.843864] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:07.162 11:16:22 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.162 11:16:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.162 11:16:22 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.162 11:16:22 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.162 11:16:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.162 ************************************ 00:05:07.162 START TEST scheduler_create_thread 00:05:07.163 ************************************ 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 2 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 3 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 4 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 5 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 6 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 7 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 8 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 9 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 10 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.163 11:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 ************************************ 00:05:08.565 END TEST scheduler_create_thread 00:05:08.565 ************************************ 00:05:08.565 11:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.565 00:05:08.565 real 0m1.177s 00:05:08.565 user 0m0.014s 00:05:08.565 sys 0m0.005s 00:05:08.565 11:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.565 11:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 11:16:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.565 11:16:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60142 00:05:08.565 11:16:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60142 ']' 00:05:08.565 11:16:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60142 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60142 00:05:08.566 killing process with pid 60142 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60142' 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60142 00:05:08.566 11:16:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60142 00:05:08.823 [2024-07-25 11:16:24.511211] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:10.197 00:05:10.197 real 0m4.307s 00:05:10.197 user 0m7.014s 00:05:10.197 sys 0m0.510s 00:05:10.197 11:16:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.197 11:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.197 ************************************ 00:05:10.197 END TEST event_scheduler 00:05:10.197 ************************************ 00:05:10.197 11:16:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.197 11:16:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.197 11:16:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.197 11:16:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.197 11:16:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.197 ************************************ 00:05:10.197 START TEST app_repeat 00:05:10.197 ************************************ 00:05:10.197 11:16:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:10.197 11:16:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.197 11:16:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.197 11:16:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.198 Process app_repeat pid: 60237 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60237 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60237' 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.198 spdk_app_start Round 0 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.198 11:16:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60237 ']' 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.198 11:16:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.198 [2024-07-25 11:16:25.825521] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:10.198 [2024-07-25 11:16:25.825737] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60237 ] 00:05:10.198 [2024-07-25 11:16:25.999922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.456 [2024-07-25 11:16:26.245594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.456 [2024-07-25 11:16:26.245602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.022 11:16:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.022 11:16:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:11.022 11:16:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.589 Malloc0 00:05:11.589 11:16:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.847 Malloc1 00:05:11.847 11:16:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.847 11:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.173 /dev/nbd0 00:05:12.173 11:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.173 11:16:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:12.173 11:16:27 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.173 1+0 records in 00:05:12.173 1+0 records out 00:05:12.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333649 s, 12.3 MB/s 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.173 11:16:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.173 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.173 11:16:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.173 11:16:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.431 /dev/nbd1 00:05:12.431 11:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.431 11:16:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:12.431 11:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.432 1+0 records in 00:05:12.432 1+0 records out 00:05:12.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349681 s, 11.7 MB/s 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.432 11:16:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.432 11:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.432 11:16:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.432 11:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.432 11:16:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
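The grep/dd/stat sequences traced above for nbd0 and nbd1 come from SPDK's waitfornbd helper in autotest_common.sh. A minimal sketch of that helper, reconstructed from this xtrace; only the success path is logged, so the poll interval and the failure return value are assumptions, and TEST_DIR stands in for the per-test directory holding the nbdtest file seen in the trace:

waitfornbd() {
    local nbd_name=$1 i size
    local tmp_file=$TEST_DIR/nbdtest   # stand-in for .../spdk/test/event/nbdtest in the trace

    # Phase 1: poll /proc/partitions (up to 20 times) until the device node shows up.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed poll interval; the traced runs succeed on the first check
    done

    # Phase 2: read one 4 KiB block with O_DIRECT and require a non-empty copy,
    # proving the NBD device actually services I/O before the test proceeds.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        if [[ $size != 0 ]]; then
            return 0
        fi
    done

    return 1   # assumed failure path; never reached in this log
}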
00:05:12.432 11:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.690 { 00:05:12.690 "nbd_device": "/dev/nbd0", 00:05:12.690 "bdev_name": "Malloc0" 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "nbd_device": "/dev/nbd1", 00:05:12.690 "bdev_name": "Malloc1" 00:05:12.690 } 00:05:12.690 ]' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.690 { 00:05:12.690 "nbd_device": "/dev/nbd0", 00:05:12.690 "bdev_name": "Malloc0" 00:05:12.690 }, 00:05:12.690 { 00:05:12.690 "nbd_device": "/dev/nbd1", 00:05:12.690 "bdev_name": "Malloc1" 00:05:12.690 } 00:05:12.690 ]' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.690 /dev/nbd1' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.690 /dev/nbd1' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.690 256+0 records in 00:05:12.690 256+0 records out 00:05:12.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00893492 s, 117 MB/s 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.690 256+0 records in 00:05:12.690 256+0 records out 00:05:12.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256479 s, 40.9 MB/s 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.690 11:16:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.947 256+0 records in 00:05:12.947 256+0 records out 00:05:12.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369927 s, 28.3 MB/s 00:05:12.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.947 11:16:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.948 11:16:28 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.948 11:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.205 11:16:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.463 11:16:29 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.463 11:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.721 11:16:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.721 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.721 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.980 11:16:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.980 11:16:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.238 11:16:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.613 [2024-07-25 11:16:31.297813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.872 [2024-07-25 11:16:31.542260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.872 [2024-07-25 11:16:31.542260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.872 [2024-07-25 11:16:31.734667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.872 [2024-07-25 11:16:31.734957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.246 11:16:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.246 11:16:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.246 spdk_app_start Round 1 00:05:17.246 11:16:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60237 ']' 00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
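Round 0 ends with spdk_kill_instance SIGTERM followed by a 3 second sleep, after which the app_repeat binary restarts and Round 1 begins. The driving loop lives in test/event/event.sh; a sketch reconstructed from the traced lines @23 to @35, where rpc_py, rootdir and repeat_pid are stand-ins for values the trace shows literally (for example pid 60237):

rpc_py="$rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # pid 60237 in this run

    # Two 64 MiB malloc bdevs with 4096-byte blocks, echoed back as Malloc0/Malloc1.
    bdev0=$($rpc_py bdev_malloc_create 64 4096)
    bdev1=$($rpc_py bdev_malloc_create 64 4096)

    # Attach both bdevs to /dev/nbd0 and /dev/nbd1 and run the write/verify pass.
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock "$bdev0 $bdev1" '/dev/nbd0 /dev/nbd1'

    # Ask the target to shut down so the next round exercises a fresh start.
    $rpc_py spdk_kill_instance SIGTERM
    sleep 3
done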
00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.246 11:16:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.652 11:16:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.652 11:16:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:17.652 11:16:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.910 Malloc0 00:05:17.910 11:16:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.169 Malloc1 00:05:18.169 11:16:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.169 11:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.429 /dev/nbd0 00:05:18.429 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.429 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.429 11:16:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:18.429 11:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:18.429 11:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:18.429 11:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:18.429 11:16:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.687 1+0 records in 00:05:18.687 1+0 records out 
00:05:18.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744201 s, 5.5 MB/s 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:18.687 11:16:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:18.687 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.687 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.687 11:16:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.946 /dev/nbd1 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.946 1+0 records in 00:05:18.946 1+0 records out 00:05:18.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028097 s, 14.6 MB/s 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:18.946 11:16:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.946 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.204 { 00:05:19.204 "nbd_device": "/dev/nbd0", 00:05:19.204 "bdev_name": "Malloc0" 00:05:19.204 }, 00:05:19.204 { 00:05:19.204 "nbd_device": "/dev/nbd1", 00:05:19.204 "bdev_name": "Malloc1" 00:05:19.204 } 
00:05:19.204 ]' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.204 { 00:05:19.204 "nbd_device": "/dev/nbd0", 00:05:19.204 "bdev_name": "Malloc0" 00:05:19.204 }, 00:05:19.204 { 00:05:19.204 "nbd_device": "/dev/nbd1", 00:05:19.204 "bdev_name": "Malloc1" 00:05:19.204 } 00:05:19.204 ]' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.204 /dev/nbd1' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.204 /dev/nbd1' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.204 256+0 records in 00:05:19.204 256+0 records out 00:05:19.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689627 s, 152 MB/s 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.204 11:16:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.204 256+0 records in 00:05:19.204 256+0 records out 00:05:19.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261027 s, 40.2 MB/s 00:05:19.204 11:16:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.204 11:16:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.204 256+0 records in 00:05:19.204 256+0 records out 00:05:19.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034002 s, 30.8 MB/s 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.205 11:16:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.464 11:16:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.723 11:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.983 11:16:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.983 11:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.983 11:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:20.241 11:16:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:20.241 11:16:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.499 11:16:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.874 [2024-07-25 11:16:37.580264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.133 [2024-07-25 11:16:37.821305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.133 [2024-07-25 11:16:37.821307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.133 [2024-07-25 11:16:38.011777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.133 [2024-07-25 11:16:38.011877] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.509 11:16:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.509 spdk_app_start Round 2 00:05:23.509 11:16:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.509 11:16:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60237 ']' 00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
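Each round's dd-from-urandom, dd-to-device and cmp steps come from the nbd_dd_data_verify helper in bdev/nbd_common.sh, called once with write and once with verify. A sketch reconstructed from the xtrace; TEST_DIR stands in for the directory holding the nbdrandtest file shown above, and failure handling on a cmp mismatch is assumed to come from the suite's set -e:

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2 i
    local tmp_file=$TEST_DIR/nbdrandtest   # stand-in for .../spdk/test/event/nbdrandtest in the trace

    if [[ $operation == write ]]; then
        # 1 MiB of random data (256 x 4 KiB), pushed to every NBD device with O_DIRECT.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [[ $operation == verify ]]; then
        # Byte-compare the first 1 MiB of each device against the random pattern,
        # then drop the pattern file.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}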
00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.509 11:16:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.075 11:16:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.075 11:16:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:24.075 11:16:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.333 Malloc0 00:05:24.333 11:16:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.591 Malloc1 00:05:24.591 11:16:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.591 11:16:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.849 /dev/nbd0 00:05:24.849 11:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.849 11:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.849 1+0 records in 00:05:24.849 1+0 records out 
00:05:24.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280066 s, 14.6 MB/s 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:24.849 11:16:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:24.849 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.849 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.849 11:16:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.108 /dev/nbd1 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.108 1+0 records in 00:05:25.108 1+0 records out 00:05:25.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310491 s, 13.2 MB/s 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.108 11:16:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.108 11:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.366 { 00:05:25.366 "nbd_device": "/dev/nbd0", 00:05:25.366 "bdev_name": "Malloc0" 00:05:25.366 }, 00:05:25.366 { 00:05:25.366 "nbd_device": "/dev/nbd1", 00:05:25.366 "bdev_name": "Malloc1" 00:05:25.366 } 
00:05:25.366 ]' 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.366 { 00:05:25.366 "nbd_device": "/dev/nbd0", 00:05:25.366 "bdev_name": "Malloc0" 00:05:25.366 }, 00:05:25.366 { 00:05:25.366 "nbd_device": "/dev/nbd1", 00:05:25.366 "bdev_name": "Malloc1" 00:05:25.366 } 00:05:25.366 ]' 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.366 /dev/nbd1' 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.366 /dev/nbd1' 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.366 11:16:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.367 256+0 records in 00:05:25.367 256+0 records out 00:05:25.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105824 s, 99.1 MB/s 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.367 11:16:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.625 256+0 records in 00:05:25.625 256+0 records out 00:05:25.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289468 s, 36.2 MB/s 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.625 256+0 records in 00:05:25.625 256+0 records out 00:05:25.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318934 s, 32.9 MB/s 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.625 11:16:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.625 11:16:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.886 11:16:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.145 11:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.403 11:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.404 11:16:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.404 11:16:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.404 11:16:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.404 11:16:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.404 11:16:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.970 11:16:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.348 [2024-07-25 11:16:43.827284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.348 [2024-07-25 11:16:44.040121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.348 [2024-07-25 11:16:44.040132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.608 [2024-07-25 11:16:44.230009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.608 [2024-07-25 11:16:44.230131] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.986 11:16:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60237 /var/tmp/spdk-nbd.sock 00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60237 ']' 00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
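The nbd_get_disks / jq / grep -c chains above implement nbd_get_count, which is checked against 2 right after nbd_start_disks and against 0 after nbd_stop_disks. A sketch reconstructed from the trace; the bare true step logged for the empty case corresponds to the || true guard below, and rootdir is a stand-in:

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count

    nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

    # grep -c exits non-zero when nothing matches; the || true keeps an empty
    # disk list from aborting the test and simply yields a count of 0.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}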
00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.986 11:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.244 11:16:45 event.app_repeat -- event/event.sh@39 -- # killprocess 60237 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60237 ']' 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60237 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60237 00:05:30.244 killing process with pid 60237 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60237' 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60237 00:05:30.244 11:16:45 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60237 00:05:31.182 spdk_app_start is called in Round 0. 00:05:31.182 Shutdown signal received, stop current app iteration 00:05:31.182 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:05:31.182 spdk_app_start is called in Round 1. 00:05:31.182 Shutdown signal received, stop current app iteration 00:05:31.182 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:05:31.182 spdk_app_start is called in Round 2. 00:05:31.182 Shutdown signal received, stop current app iteration 00:05:31.182 Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 reinitialization... 00:05:31.182 spdk_app_start is called in Round 3. 00:05:31.182 Shutdown signal received, stop current app iteration 00:05:31.452 ************************************ 00:05:31.452 END TEST app_repeat 00:05:31.452 ************************************ 00:05:31.452 11:16:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.452 11:16:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.452 00:05:31.452 real 0m21.302s 00:05:31.452 user 0m45.614s 00:05:31.452 sys 0m3.115s 00:05:31.452 11:16:47 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.452 11:16:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.452 11:16:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.452 11:16:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.452 11:16:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.452 11:16:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.452 11:16:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.452 ************************************ 00:05:31.452 START TEST cpu_locks 00:05:31.452 ************************************ 00:05:31.452 11:16:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.452 * Looking for test storage... 
00:05:31.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.452 11:16:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.452 11:16:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.452 11:16:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.452 11:16:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.452 11:16:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.452 11:16:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.452 11:16:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.452 ************************************ 00:05:31.452 START TEST default_locks 00:05:31.452 ************************************ 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60693 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60693 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60693 ']' 00:05:31.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.452 11:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.725 [2024-07-25 11:16:47.340939] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:31.725 [2024-07-25 11:16:47.342236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60693 ] 00:05:31.725 [2024-07-25 11:16:47.512161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.984 [2024-07-25 11:16:47.739190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.921 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.921 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:32.921 11:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60693 00:05:32.921 11:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.921 11:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60693 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60693 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60693 ']' 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60693 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.179 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60693 00:05:33.180 killing process with pid 60693 00:05:33.180 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.180 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.180 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60693' 00:05:33.180 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60693 00:05:33.180 11:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60693 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60693 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60693 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60693 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60693 ']' 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
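The default_locks test above leans on two helpers whose bodies the xtrace exposes: locks_exist from test/event/cpu_locks.sh and killprocess from autotest_common.sh. A sketch reconstructed from the trace; the non-Linux and sudo branches of killprocess are not exercised in this log and are simplified, and the real helper may exit rather than return on a missing pid:

locks_exist() {
    # spdk_tgt takes file locks whose names contain spdk_cpu_lock, one per claimed core.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {
    local pid=$1
    if [[ -z $pid ]]; then
        return 1   # assumption; the trace only shows the non-empty case
    fi
    if kill -0 "$pid"; then
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
}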
00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.711 ERROR: process (pid: 60693) is no longer running 00:05:35.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60693) - No such process 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.711 00:05:35.711 real 0m4.065s 00:05:35.711 user 0m3.967s 00:05:35.711 sys 0m0.729s 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.711 11:16:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.711 ************************************ 00:05:35.711 END TEST default_locks 00:05:35.711 ************************************ 00:05:35.711 11:16:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:35.711 11:16:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.711 11:16:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.711 11:16:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.711 ************************************ 00:05:35.711 START TEST default_locks_via_rpc 00:05:35.711 ************************************ 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60768 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60768 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60768 ']' 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.711 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.711 11:16:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.711 [2024-07-25 11:16:51.457405] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:35.711 [2024-07-25 11:16:51.457587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60768 ] 00:05:35.970 [2024-07-25 11:16:51.631735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.228 [2024-07-25 11:16:51.907477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60768 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60768 00:05:37.184 11:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60768 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60768 ']' 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60768 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.443 11:16:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60768 00:05:37.443 killing process with pid 60768 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60768' 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60768 00:05:37.443 11:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60768 00:05:39.980 ************************************ 00:05:39.980 END TEST default_locks_via_rpc 00:05:39.980 ************************************ 00:05:39.980 00:05:39.980 real 0m4.110s 00:05:39.980 user 0m4.062s 00:05:39.980 sys 0m0.747s 00:05:39.980 11:16:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.980 11:16:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.980 11:16:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.980 11:16:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.980 11:16:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.980 11:16:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.980 ************************************ 00:05:39.980 START TEST non_locking_app_on_locked_coremask 00:05:39.980 ************************************ 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:39.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60842 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60842 /var/tmp/spdk.sock 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60842 ']' 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.980 11:16:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.980 [2024-07-25 11:16:55.626842] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:39.980 [2024-07-25 11:16:55.627064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:05:39.980 [2024-07-25 11:16:55.805824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.300 [2024-07-25 11:16:56.079295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60863 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60863 /var/tmp/spdk2.sock 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60863 ']' 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.252 11:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.252 [2024-07-25 11:16:57.053411] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:41.252 [2024-07-25 11:16:57.054018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60863 ] 00:05:41.511 [2024-07-25 11:16:57.235031] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.511 [2024-07-25 11:16:57.235100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.078 [2024-07-25 11:16:57.727674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.982 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.982 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.982 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60842 00:05:43.982 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.982 11:16:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60842 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60842 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60842 ']' 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60842 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.549 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60842 00:05:44.807 killing process with pid 60842 00:05:44.807 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.807 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.807 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60842' 00:05:44.807 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60842 00:05:44.808 11:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60842 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60863 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60863 ']' 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60863 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60863 00:05:50.078 killing process with pid 60863 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60863' 00:05:50.078 11:17:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60863 00:05:50.078 11:17:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60863 00:05:51.505 ************************************ 00:05:51.505 END TEST non_locking_app_on_locked_coremask 00:05:51.505 ************************************ 00:05:51.505 00:05:51.505 real 0m11.682s 00:05:51.505 user 0m12.093s 00:05:51.505 sys 0m1.468s 00:05:51.505 11:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.505 11:17:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 11:17:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:51.505 11:17:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.505 11:17:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.505 11:17:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 ************************************ 00:05:51.505 START TEST locking_app_on_unlocked_coremask 00:05:51.505 ************************************ 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61016 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61016 /var/tmp/spdk.sock 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61016 ']' 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.505 11:17:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 [2024-07-25 11:17:07.364649] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:05:51.505 [2024-07-25 11:17:07.364844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61016 ] 00:05:51.764 [2024-07-25 11:17:07.539247] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.764 [2024-07-25 11:17:07.539358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.023 [2024-07-25 11:17:07.785960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61037 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61037 /var/tmp/spdk2.sock 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61037 ']' 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.957 11:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.957 [2024-07-25 11:17:08.773644] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:05:52.957 [2024-07-25 11:17:08.773825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:05:53.215 [2024-07-25 11:17:08.953990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.780 [2024-07-25 11:17:09.436766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.688 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.688 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:55.688 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61037 00:05:55.688 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61037 00:05:55.688 11:17:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61016 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61016 ']' 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61016 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61016 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.621 killing process with pid 61016 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61016' 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61016 00:05:56.621 11:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61016 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61037 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61037 ']' 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 61037 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61037 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.887 killing process with pid 61037 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61037' 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 61037 00:06:01.887 11:17:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 61037 00:06:03.819 00:06:03.819 real 0m11.981s 00:06:03.819 user 0m12.453s 00:06:03.819 sys 0m1.522s 00:06:03.819 ************************************ 00:06:03.819 END TEST locking_app_on_unlocked_coremask 00:06:03.819 ************************************ 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.819 11:17:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.819 11:17:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.819 11:17:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.819 11:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.819 ************************************ 00:06:03.819 START TEST locking_app_on_locked_coremask 00:06:03.819 ************************************ 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61186 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61186 /var/tmp/spdk.sock 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61186 ']' 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.819 11:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.819 [2024-07-25 11:17:19.373510] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:03.819 [2024-07-25 11:17:19.373705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61186 ] 00:06:03.819 [2024-07-25 11:17:19.545364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.078 [2024-07-25 11:17:19.773487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61208 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61208 /var/tmp/spdk2.sock 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61208 /var/tmp/spdk2.sock 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61208 /var/tmp/spdk2.sock 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61208 ']' 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.015 11:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.015 [2024-07-25 11:17:20.721862] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:05.015 [2024-07-25 11:17:20.722651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61208 ] 00:06:05.274 [2024-07-25 11:17:20.899732] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61186 has claimed it. 00:06:05.274 [2024-07-25 11:17:20.899840] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.533 ERROR: process (pid: 61208) is no longer running 00:06:05.533 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61208) - No such process 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61186 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61186 00:06:05.533 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61186 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61186 ']' 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61186 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61186 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.100 killing process with pid 61186 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61186' 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61186 00:06:06.100 11:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61186 00:06:08.632 00:06:08.632 real 0m4.831s 00:06:08.632 user 0m5.027s 00:06:08.632 sys 0m0.897s 00:06:08.632 11:17:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.632 11:17:24 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:08.632 ************************************ 00:06:08.632 END TEST locking_app_on_locked_coremask 00:06:08.632 ************************************ 00:06:08.632 11:17:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.632 11:17:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.632 11:17:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.632 11:17:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.632 ************************************ 00:06:08.632 START TEST locking_overlapped_coremask 00:06:08.632 ************************************ 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61278 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61278 /var/tmp/spdk.sock 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61278 ']' 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.632 11:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.632 [2024-07-25 11:17:24.259130] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:08.632 [2024-07-25 11:17:24.259329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61278 ] 00:06:08.632 [2024-07-25 11:17:24.439471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.890 [2024-07-25 11:17:24.725577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.890 [2024-07-25 11:17:24.725742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.890 [2024-07-25 11:17:24.725749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61296 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61296 /var/tmp/spdk2.sock 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61296 /var/tmp/spdk2.sock 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61296 /var/tmp/spdk2.sock 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61296 ']' 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.825 11:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.825 [2024-07-25 11:17:25.651933] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:09.825 [2024-07-25 11:17:25.652080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:06:10.084 [2024-07-25 11:17:25.824053] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61278 has claimed it. 00:06:10.084 [2024-07-25 11:17:25.824150] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.650 ERROR: process (pid: 61296) is no longer running 00:06:10.650 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61296) - No such process 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61278 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61278 ']' 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61278 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61278 00:06:10.650 killing process with pid 61278 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.650 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61278' 00:06:10.651 11:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61278 00:06:10.651 11:17:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61278 00:06:13.258 ************************************ 00:06:13.258 END TEST locking_overlapped_coremask 00:06:13.258 ************************************ 00:06:13.258 00:06:13.258 real 0m4.490s 00:06:13.258 user 0m11.585s 00:06:13.258 sys 0m0.674s 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.258 11:17:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.258 11:17:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.258 11:17:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.258 11:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.258 ************************************ 00:06:13.258 START TEST locking_overlapped_coremask_via_rpc 00:06:13.258 ************************************ 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61366 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61366 /var/tmp/spdk.sock 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61366 ']' 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.258 11:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.258 [2024-07-25 11:17:28.770171] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:13.258 [2024-07-25 11:17:28.770330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61366 ] 00:06:13.258 [2024-07-25 11:17:28.935083] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.258 [2024-07-25 11:17:28.935149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.538 [2024-07-25 11:17:29.179452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.538 [2024-07-25 11:17:29.179587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.538 [2024-07-25 11:17:29.179603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61384 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61384 /var/tmp/spdk2.sock 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61384 ']' 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.474 11:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.474 [2024-07-25 11:17:30.126838] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:14.474 [2024-07-25 11:17:30.127015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61384 ] 00:06:14.474 [2024-07-25 11:17:30.306666] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.474 [2024-07-25 11:17:30.306774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.040 [2024-07-25 11:17:30.795774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.040 [2024-07-25 11:17:30.795828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.040 [2024-07-25 11:17:30.795849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.941 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.199 [2024-07-25 11:17:32.821871] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61366 has claimed it. 00:06:17.199 request: 00:06:17.199 { 00:06:17.199 "method": "framework_enable_cpumask_locks", 00:06:17.199 "req_id": 1 00:06:17.199 } 00:06:17.199 Got JSON-RPC error response 00:06:17.199 response: 00:06:17.199 { 00:06:17.199 "code": -32603, 00:06:17.199 "message": "Failed to claim CPU core: 2" 00:06:17.199 } 00:06:17.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61366 /var/tmp/spdk.sock 00:06:17.199 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61366 ']' 00:06:17.200 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.200 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.200 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.200 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.200 11:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61384 /var/tmp/spdk2.sock 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61384 ']' 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
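Each "Waiting for process to start up and listen on UNIX domain socket ..." message in this trace is printed by the suite's waitforlisten helper before it polls the target's RPC socket (max_retries=100 above). A standalone sketch of that polling loop, assuming scripts/rpc.py is available — the real helper in autotest_common.sh may differ in detail:

    wait_for_sock() {                                   # poll until a target answers RPCs on the given socket
        local sock=$1 retries=100
        while (( retries-- )); do
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                        # timed out, target never started listening
    }
    wait_for_sock /var/tmp/spdk2.sock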
00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.458 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.716 00:06:17.716 real 0m4.737s 00:06:17.716 user 0m1.598s 00:06:17.716 sys 0m0.248s 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.716 11:17:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.716 ************************************ 00:06:17.716 END TEST locking_overlapped_coremask_via_rpc 00:06:17.716 ************************************ 00:06:17.716 11:17:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:17.716 11:17:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61366 ]] 00:06:17.716 11:17:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61366 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61366 ']' 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61366 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61366 00:06:17.716 killing process with pid 61366 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61366' 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61366 00:06:17.716 11:17:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61366 00:06:20.320 11:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61384 ]] 00:06:20.320 11:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61384 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61384 ']' 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61384 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.320 
11:17:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61384 00:06:20.320 killing process with pid 61384 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61384' 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61384 00:06:20.320 11:17:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61384 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.242 Process with pid 61366 is not found 00:06:22.242 Process with pid 61384 is not found 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61366 ]] 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61366 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61366 ']' 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61366 00:06:22.242 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61366) - No such process 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61366 is not found' 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61384 ]] 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61384 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61384 ']' 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61384 00:06:22.242 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61384) - No such process 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61384 is not found' 00:06:22.242 11:17:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:22.242 ************************************ 00:06:22.242 END TEST cpu_locks 00:06:22.242 ************************************ 00:06:22.242 00:06:22.242 real 0m50.926s 00:06:22.242 user 1m25.557s 00:06:22.242 sys 0m7.439s 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.242 11:17:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.242 ************************************ 00:06:22.242 END TEST event 00:06:22.242 ************************************ 00:06:22.242 00:06:22.242 real 1m22.615s 00:06:22.242 user 2m26.266s 00:06:22.242 sys 0m11.678s 00:06:22.242 11:17:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.242 11:17:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.242 11:17:38 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.242 11:17:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.242 11:17:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.242 11:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.501 ************************************ 00:06:22.501 START TEST thread 00:06:22.501 ************************************ 00:06:22.501 11:17:38 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:22.501 * Looking for test storage... 
00:06:22.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:22.501 11:17:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.501 11:17:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:22.501 11:17:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.501 11:17:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.501 ************************************ 00:06:22.501 START TEST thread_poller_perf 00:06:22.501 ************************************ 00:06:22.501 11:17:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:22.501 [2024-07-25 11:17:38.249099] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:22.501 [2024-07-25 11:17:38.249243] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61571 ] 00:06:22.761 [2024-07-25 11:17:38.412780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.019 [2024-07-25 11:17:38.650205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.019 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:24.395 ====================================== 00:06:24.395 busy:2215592282 (cyc) 00:06:24.395 total_run_count: 302000 00:06:24.395 tsc_hz: 2200000000 (cyc) 00:06:24.395 ====================================== 00:06:24.395 poller_cost: 7336 (cyc), 3334 (nsec) 00:06:24.395 00:06:24.395 real 0m1.854s 00:06:24.395 user 0m1.619s 00:06:24.395 sys 0m0.123s 00:06:24.395 11:17:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.395 ************************************ 00:06:24.395 END TEST thread_poller_perf 00:06:24.395 ************************************ 00:06:24.395 11:17:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.395 11:17:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.395 11:17:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.395 11:17:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.395 11:17:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.395 ************************************ 00:06:24.395 START TEST thread_poller_perf 00:06:24.395 ************************************ 00:06:24.395 11:17:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:24.395 [2024-07-25 11:17:40.162211] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:24.395 [2024-07-25 11:17:40.162373] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61613 ] 00:06:24.654 [2024-07-25 11:17:40.335896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.912 Running 1000 pollers for 1 seconds with 0 microseconds period. 
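The summary block above comes straight from poller_perf, and the reported poller_cost matches the busy cycle count divided by the number of poller runs, converted to nanoseconds with the reported tsc_hz. Re-deriving this run's numbers with shell integer arithmetic (same rounding as the tool's output):

    busy=2215592282 runs=302000 tsc_hz=2200000000
    echo $(( busy / runs ))                            # 7336 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))      # 3334 nsec per invocation at 2.2 GHz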
00:06:24.912 [2024-07-25 11:17:40.604402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.289 ====================================== 00:06:26.289 busy:2203976613 (cyc) 00:06:26.289 total_run_count: 3717000 00:06:26.289 tsc_hz: 2200000000 (cyc) 00:06:26.289 ====================================== 00:06:26.289 poller_cost: 592 (cyc), 269 (nsec) 00:06:26.289 00:06:26.289 real 0m1.900s 00:06:26.289 user 0m1.685s 00:06:26.289 sys 0m0.104s 00:06:26.289 11:17:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.289 11:17:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.289 ************************************ 00:06:26.289 END TEST thread_poller_perf 00:06:26.289 ************************************ 00:06:26.289 11:17:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.289 00:06:26.289 real 0m3.931s 00:06:26.289 user 0m3.372s 00:06:26.289 sys 0m0.330s 00:06:26.289 11:17:42 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.289 11:17:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.289 ************************************ 00:06:26.289 END TEST thread 00:06:26.289 ************************************ 00:06:26.289 11:17:42 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:26.289 11:17:42 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.289 11:17:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.289 11:17:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.289 11:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:26.289 ************************************ 00:06:26.289 START TEST app_cmdline 00:06:26.289 ************************************ 00:06:26.289 11:17:42 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.548 * Looking for test storage... 00:06:26.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.548 11:17:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.548 11:17:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61694 00:06:26.548 11:17:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61694 00:06:26.548 11:17:42 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61694 ']' 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.548 11:17:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.548 [2024-07-25 11:17:42.321061] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
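The second run (-l 0) drives the same 1000 pollers without the 1 microsecond timer period, and the same arithmetic shows the much lower per-run cost:

    busy=2203976613 runs=3717000 tsc_hz=2200000000
    echo $(( busy / runs ))                            # 592 cycles per run, vs 7336 with the 1 usec timer
    echo $(( busy / runs * 1000000000 / tsc_hz ))      # 269 nsec per run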
00:06:26.548 [2024-07-25 11:17:42.321585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61694 ] 00:06:26.806 [2024-07-25 11:17:42.495653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.066 [2024-07-25 11:17:42.750085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.000 11:17:43 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.000 11:17:43 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.000 { 00:06:28.000 "version": "SPDK v24.09-pre git sha1 86fd5638b", 00:06:28.000 "fields": { 00:06:28.000 "major": 24, 00:06:28.000 "minor": 9, 00:06:28.000 "patch": 0, 00:06:28.000 "suffix": "-pre", 00:06:28.000 "commit": "86fd5638b" 00:06:28.000 } 00:06:28.000 } 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.000 11:17:43 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.000 11:17:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.000 11:17:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.000 11:17:43 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.258 11:17:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.258 11:17:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.258 11:17:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.258 11:17:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.259 11:17:43 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.259 11:17:43 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.259 11:17:43 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.517 request: 00:06:28.517 { 00:06:28.517 "method": "env_dpdk_get_mem_stats", 00:06:28.517 "req_id": 1 00:06:28.517 } 00:06:28.517 Got JSON-RPC error response 00:06:28.517 response: 00:06:28.517 { 00:06:28.517 "code": -32601, 00:06:28.517 "message": "Method not found" 00:06:28.517 } 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.517 11:17:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61694 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61694 ']' 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61694 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61694 00:06:28.517 killing process with pid 61694 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61694' 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@969 -- # kill 61694 00:06:28.517 11:17:44 app_cmdline -- common/autotest_common.sh@974 -- # wait 61694 00:06:31.048 00:06:31.048 real 0m4.311s 00:06:31.048 user 0m4.680s 00:06:31.048 sys 0m0.625s 00:06:31.048 11:17:46 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.048 ************************************ 00:06:31.048 END TEST app_cmdline 00:06:31.048 11:17:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.048 ************************************ 00:06:31.048 11:17:46 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.048 11:17:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.048 11:17:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.048 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:31.048 ************************************ 00:06:31.048 START TEST version 00:06:31.048 ************************************ 00:06:31.048 11:17:46 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.048 * Looking for test storage... 
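The -32601 "Method not found" above is the expected result, not a failure: cmdline.sh starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list is rejected at the JSON-RPC layer. The check can be reproduced by hand against a target started the same way (a waitforlisten step belongs between start-up and the first call; omitted here for brevity):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # allowed: prints the version object shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

    # not allowed: fails with JSON-RPC error -32601 (Method not found)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'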
00:06:31.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.048 11:17:46 version -- app/version.sh@17 -- # get_header_version major 00:06:31.048 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.048 11:17:46 version -- app/version.sh@17 -- # major=24 00:06:31.048 11:17:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.048 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.048 11:17:46 version -- app/version.sh@18 -- # minor=9 00:06:31.048 11:17:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.048 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.048 11:17:46 version -- app/version.sh@19 -- # patch=0 00:06:31.048 11:17:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.048 11:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # cut -f2 00:06:31.048 11:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.048 11:17:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.048 11:17:46 version -- app/version.sh@22 -- # version=24.9 00:06:31.048 11:17:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.048 11:17:46 version -- app/version.sh@28 -- # version=24.9rc0 00:06:31.048 11:17:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.048 11:17:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.048 11:17:46 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:31.048 11:17:46 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:31.048 00:06:31.048 real 0m0.146s 00:06:31.048 user 0m0.088s 00:06:31.048 sys 0m0.088s 00:06:31.049 11:17:46 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.049 11:17:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.049 ************************************ 00:06:31.049 END TEST version 00:06:31.049 ************************************ 00:06:31.049 11:17:46 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:31.049 11:17:46 -- spdk/autotest.sh@201 -- # [[ 1 -eq 1 ]] 00:06:31.049 11:17:46 -- spdk/autotest.sh@202 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.049 11:17:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.049 11:17:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.049 11:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:31.049 ************************************ 00:06:31.049 START TEST bdev_raid 00:06:31.049 ************************************ 
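The version.sh run above takes every component straight from the C header: grep pulls the matching #define out of include/spdk/version.h, cut -f2 keeps the value, tr strips the quotes, and the assembled 24.9rc0 string is compared with what the spdk Python package reports. The major/suffix extraction exactly as traced, plus the Python cross-check:

    repo=/home/vagrant/spdk_repo/spdk
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$repo/include/spdk/version.h" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$repo/include/spdk/version.h" | cut -f2 | tr -d '"')
    echo "$major$suffix"        # 24-pre for this tree
    PYTHONPATH=$repo/python python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0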
00:06:31.049 11:17:46 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.049 * Looking for test storage... 00:06:31.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.049 11:17:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:06:31.049 11:17:46 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:31.049 11:17:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.049 11:17:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.049 11:17:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.049 ************************************ 00:06:31.049 START TEST raid0_resize_superblock_test 00:06:31.049 ************************************ 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=61860 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 61860' 00:06:31.049 Process raid pid: 61860 00:06:31.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 61860 /var/tmp/spdk-raid.sock 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61860 ']' 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.049 11:17:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.049 [2024-07-25 11:17:46.886063] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
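Everything in bdev_raid.sh below runs against a dedicated application instance: bdev_svc is started with -r /var/tmp/spdk-raid.sock and every rpc.py call carries -s with the same path, so the raid tests never touch a default /var/tmp/spdk.sock target. The pattern reduced to its essentials (start-up polling omitted):

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }
    rpc_py bdev_malloc_create -b malloc0 512 512      # first backing device for the raid tests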
00:06:31.049 [2024-07-25 11:17:46.886241] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.307 [2024-07-25 11:17:47.062915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.566 [2024-07-25 11:17:47.307063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.824 [2024-07-25 11:17:47.511307] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.824 [2024-07-25 11:17:47.511377] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.141 11:17:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.141 11:17:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:32.141 11:17:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:06:33.093 malloc0 00:06:33.093 11:17:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:33.093 [2024-07-25 11:17:48.898620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:33.093 [2024-07-25 11:17:48.898745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:33.094 [2024-07-25 11:17:48.898784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:33.094 [2024-07-25 11:17:48.898802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:33.094 [2024-07-25 11:17:48.901752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:33.094 [2024-07-25 11:17:48.901795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:33.094 pt0 00:06:33.094 11:17:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:06:33.661 344c01be-29ad-4494-b60c-8e5b2eaaa0da 00:06:33.661 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:06:33.920 74ef4216-6315-47b0-85fb-a661293e63d6 00:06:33.920 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:06:34.179 b69b79e3-4d49-43b3-9a16-48f8bba3f4f3 00:06:34.179 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:06:34.179 11:17:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:06:34.438 [2024-07-25 11:17:50.086459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 74ef4216-6315-47b0-85fb-a661293e63d6 is claimed 00:06:34.438 [2024-07-25 11:17:50.086673] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev b69b79e3-4d49-43b3-9a16-48f8bba3f4f3 is claimed 00:06:34.438 [2024-07-25 11:17:50.086926] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:34.438 [2024-07-25 11:17:50.086947] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:34.438 [2024-07-25 11:17:50.087323] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:34.438 [2024-07-25 11:17:50.087608] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:34.438 [2024-07-25 11:17:50.087664] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:34.438 [2024-07-25 11:17:50.087888] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.438 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:34.438 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:34.697 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:06:34.697 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:34.697 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:34.957 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:06:34.957 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.957 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:34.957 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:34.957 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:35.215 [2024-07-25 11:17:50.886952] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.215 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.215 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.215 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:06:35.215 11:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:06:35.473 [2024-07-25 11:17:51.159036] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.473 [2024-07-25 11:17:51.159095] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '74ef4216-6315-47b0-85fb-a661293e63d6' was resized: old size 131072, new size 204800 00:06:35.473 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:06:35.731 [2024-07-25 11:17:51.447070] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.731 [2024-07-25 11:17:51.447124] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b69b79e3-4d49-43b3-9a16-48f8bba3f4f3' was resized: old size 131072, new size 204800 00:06:35.731 
[2024-07-25 11:17:51.447171] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:35.731 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:35.731 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:06:35.989 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:06:35.989 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:35.989 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:06:36.247 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:06:36.247 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:36.247 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:36.247 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:36.247 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:06:36.505 [2024-07-25 11:17:52.271411] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:36.505 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:36.505 11:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:36.505 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:06:36.505 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:06:36.763 [2024-07-25 11:17:52.519154] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:36.763 [2024-07-25 11:17:52.519267] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:36.763 [2024-07-25 11:17:52.519283] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:36.763 [2024-07-25 11:17:52.519304] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:36.763 [2024-07-25 11:17:52.519457] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.763 [2024-07-25 11:17:52.519570] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:36.763 [2024-07-25 11:17:52.519586] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:36.763 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:37.022 [2024-07-25 11:17:52.811225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:37.022 [2024-07-25 11:17:52.811346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.022 [2024-07-25 11:17:52.811393] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:37.022 [2024-07-25 11:17:52.811410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.022 [2024-07-25 11:17:52.814365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.022 [2024-07-25 11:17:52.814408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:37.022 pt0 00:06:37.022 [2024-07-25 11:17:52.816832] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 74ef4216-6315-47b0-85fb-a661293e63d6 00:06:37.022 [2024-07-25 11:17:52.816896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 74ef4216-6315-47b0-85fb-a661293e63d6 is claimed 00:06:37.022 [2024-07-25 11:17:52.817038] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b69b79e3-4d49-43b3-9a16-48f8bba3f4f3 00:06:37.022 [2024-07-25 11:17:52.817066] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev b69b79e3-4d49-43b3-9a16-48f8bba3f4f3 is claimed 00:06:37.022 [2024-07-25 11:17:52.817237] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b69b79e3-4d49-43b3-9a16-48f8bba3f4f3 (2) smaller than existing raid bdev Raid (3) 00:06:37.022 [2024-07-25 11:17:52.817287] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:37.022 [2024-07-25 11:17:52.817303] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:37.022 [2024-07-25 11:17:52.817628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:37.022 [2024-07-25 11:17:52.818044] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:37.022 [2024-07-25 11:17:52.818180] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:37.022 [2024-07-25 11:17:52.818508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.022 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:37.022 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:37.022 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:37.022 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:06:37.279 [2024-07-25 11:17:53.079575] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:37.279 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:37.279 11:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 61860 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61860 ']' 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61860 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61860 00:06:37.279 killing process with pid 61860 00:06:37.279 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.280 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.280 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61860' 00:06:37.280 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 61860 00:06:37.280 [2024-07-25 11:17:53.127181] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:37.280 11:17:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 61860 00:06:37.280 [2024-07-25 11:17:53.127328] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:37.280 [2024-07-25 11:17:53.127428] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:37.280 [2024-07-25 11:17:53.127443] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:38.653 [2024-07-25 11:17:54.395300] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:40.026 ************************************ 00:06:40.026 END TEST raid0_resize_superblock_test 00:06:40.026 ************************************ 00:06:40.026 11:17:55 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:06:40.026 00:06:40.026 real 0m8.807s 00:06:40.026 user 0m12.902s 00:06:40.026 sys 0m1.130s 00:06:40.026 11:17:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.026 11:17:55 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.026 11:17:55 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:40.026 11:17:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:40.026 11:17:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.026 11:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:40.026 ************************************ 00:06:40.026 START TEST raid1_resize_superblock_test 00:06:40.026 ************************************ 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:06:40.026 Process raid pid: 62013 00:06:40.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
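Condensed, the raid0_resize_superblock_test that just finished issues the following RPC sequence: two 64 MiB logical volumes are built on a passthru-backed lvstore, assembled into a raid0 bdev with an on-disk superblock (-s), both lvols are grown to 100 MiB (the raid grows online from 245760 to 393216 blocks), and the passthru device is deleted and re-created so the raid re-assembles from its superblocks at the new size. A sketch with the same names as above:

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    rpc_py bdev_malloc_create -b malloc0 512 512
    rpc_py bdev_passthru_create -b malloc0 -p pt0
    rpc_py bdev_lvol_create_lvstore pt0 lvs0
    rpc_py bdev_lvol_create -l lvs0 lvol0 64
    rpc_py bdev_lvol_create -l lvs0 lvol1 64
    rpc_py bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s   # -s writes the superblock

    rpc_py bdev_lvol_resize lvs0/lvol0 100
    rpc_py bdev_lvol_resize lvs0/lvol1 100

    # dropping and re-adding pt0 forces re-examination of the superblocks
    rpc_py bdev_passthru_delete pt0
    rpc_py bdev_passthru_create -b malloc0 -p pt0
    rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # 393216 after the resize and again after re-assembly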
00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=62013 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 62013' 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 62013 /var/tmp/spdk-raid.sock 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62013 ']' 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:40.026 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.027 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:40.027 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.027 11:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.027 [2024-07-25 11:17:55.734643] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:06:40.027 [2024-07-25 11:17:55.735789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.284 [2024-07-25 11:17:55.912471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.542 [2024-07-25 11:17:56.183868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.542 [2024-07-25 11:17:56.394001] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.542 [2024-07-25 11:17:56.394253] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.800 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.800 11:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:40.800 11:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:06:41.733 malloc0 00:06:41.733 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:41.991 [2024-07-25 11:17:57.688738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:41.991 [2024-07-25 11:17:57.688855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.991 [2024-07-25 11:17:57.688894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:41.991 [2024-07-25 11:17:57.688911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.992 [2024-07-25 11:17:57.691850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.992 [2024-07-25 11:17:57.691898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt0 00:06:41.992 pt0 00:06:41.992 11:17:57 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:06:42.559 355f310b-9dbc-4e57-964e-545c3e4490fc 00:06:42.559 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:06:42.559 69b1ca5f-99a8-47cf-8cd6-0d803064aa5d 00:06:42.818 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:06:42.818 af099c35-d784-416c-8e0d-b5a36a175aa7 00:06:43.076 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:06:43.076 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:06:43.076 [2024-07-25 11:17:58.928103] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 69b1ca5f-99a8-47cf-8cd6-0d803064aa5d is claimed 00:06:43.076 [2024-07-25 11:17:58.928279] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev af099c35-d784-416c-8e0d-b5a36a175aa7 is claimed 00:06:43.076 [2024-07-25 11:17:58.928552] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.076 [2024-07-25 11:17:58.928570] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:43.076 [2024-07-25 11:17:58.929218] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:43.076 [2024-07-25 11:17:58.929646] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.076 [2024-07-25 11:17:58.929795] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:43.076 [2024-07-25 11:17:58.930160] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.076 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:43.076 11:17:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:43.334 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:06:43.334 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.334 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:43.902 [2024-07-25 
11:17:59.712570] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:06:43.902 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:06:44.161 [2024-07-25 11:17:59.952644] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:44.161 [2024-07-25 11:17:59.952725] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '69b1ca5f-99a8-47cf-8cd6-0d803064aa5d' was resized: old size 131072, new size 204800 00:06:44.161 11:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:06:44.420 [2024-07-25 11:18:00.176640] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:44.420 [2024-07-25 11:18:00.176724] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'af099c35-d784-416c-8e0d-b5a36a175aa7' was resized: old size 131072, new size 204800 00:06:44.420 [2024-07-25 11:18:00.176785] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:44.420 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:06:44.420 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:06:44.680 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:06:44.680 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:06:44.680 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:06:44.938 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:06:44.938 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:44.938 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:44.938 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:44.938 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:06:45.196 [2024-07-25 11:18:00.980977] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.196 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:45.196 11:18:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:06:45.196 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:06:45.196 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:06:45.455 [2024-07-25 11:18:01.220764] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:45.455 [2024-07-25 11:18:01.220891] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:45.455 [2024-07-25 11:18:01.220927] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:45.455 [2024-07-25 11:18:01.221144] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:45.455 [2024-07-25 11:18:01.221392] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.455 [2024-07-25 11:18:01.221497] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.455 [2024-07-25 11:18:01.221515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:45.455 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:06:45.714 [2024-07-25 11:18:01.532770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:45.714 [2024-07-25 11:18:01.532890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:45.714 [2024-07-25 11:18:01.532925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:45.714 [2024-07-25 11:18:01.532942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:45.714 [2024-07-25 11:18:01.535752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:45.714 [2024-07-25 11:18:01.535813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:45.714 pt0 00:06:45.714 [2024-07-25 11:18:01.538253] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 69b1ca5f-99a8-47cf-8cd6-0d803064aa5d 00:06:45.714 [2024-07-25 11:18:01.538333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 69b1ca5f-99a8-47cf-8cd6-0d803064aa5d is claimed 00:06:45.714 [2024-07-25 11:18:01.538475] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev af099c35-d784-416c-8e0d-b5a36a175aa7 00:06:45.714 [2024-07-25 11:18:01.538503] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev af099c35-d784-416c-8e0d-b5a36a175aa7 is claimed 00:06:45.714 [2024-07-25 11:18:01.538695] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev af099c35-d784-416c-8e0d-b5a36a175aa7 (2) smaller than existing raid bdev Raid (3) 00:06:45.714 [2024-07-25 11:18:01.538750] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:45.714 [2024-07-25 11:18:01.538765] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:45.714 [2024-07-25 11:18:01.539104] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:45.714 [2024-07-25 11:18:01.539315] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:45.714 [2024-07-25 11:18:01.539330] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:45.714 [2024-07-25 11:18:01.539518] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:45.714 11:18:01 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:45.714 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:45.714 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:45.714 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:06:45.973 [2024-07-25 11:18:01.805182] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 62013 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62013 ']' 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62013 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62013 00:06:45.973 killing process with pid 62013 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62013' 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 62013 00:06:45.973 [2024-07-25 11:18:01.851832] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:45.973 11:18:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 62013 00:06:45.973 [2024-07-25 11:18:01.851939] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:45.973 [2024-07-25 11:18:01.852013] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:45.973 [2024-07-25 11:18:01.852027] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:47.368 [2024-07-25 11:18:03.136285] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.744 ************************************ 00:06:48.744 END TEST raid1_resize_superblock_test 00:06:48.744 ************************************ 00:06:48.744 11:18:04 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:06:48.744 00:06:48.744 real 0m8.657s 00:06:48.744 user 0m12.683s 00:06:48.744 sys 0m1.083s 00:06:48.744 11:18:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.744 11:18:04 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 
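The raid1 variant that just completed lines up with the raid0 numbers exactly as the two levels would suggest: each 64 MiB lvol contributes 122880 usable blocks (the gap to the raw 131072 presumably goes to superblock/metadata reservation; that split is not spelled out in the traces), raid0 exposes the sum of the legs and raid1 exposes a single leg:

    leg_before=122880   # raid1 size before the resize == one leg's usable blocks
    leg_after=196608    # raid1 size after growing the lvols to 100 MiB
    echo $(( 2 * leg_before ))   # 245760, the raid0 size before the resize
    echo $(( 2 * leg_after ))    # 393216, the raid0 size after the resize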
00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:06:48.744 11:18:04 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:48.744 11:18:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:48.744 11:18:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.744 11:18:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.744 ************************************ 00:06:48.744 START TEST raid_function_test_raid0 00:06:48.744 ************************************ 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:48.744 Process raid pid: 62175 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=62175 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 62175' 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 62175 /var/tmp/spdk-raid.sock 00:06:48.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 62175 ']' 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.744 11:18:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:48.744 [2024-07-25 11:18:04.455844] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
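Before the function tests, bdev_raid.sh checks that the host can provide NBD at all: a dry-run modprobe (-n) sets has_nbd, and only then is the module actually loaded; the raid0 function test in the traces below then exposes the assembled bdev as /dev/nbd0 and reads one direct 4 KiB block through it with dd, which is what produces the "1+0 records in / 4096 bytes copied" lines. A rough sketch of both steps (the scratch file path here is a placeholder, and nbd_stop_disk is the usual counterpart rather than a call shown in this excerpt):

    if [ "$(uname -s)" = Linux ] && modprobe -n nbd; then
        modprobe nbd                                   # make /dev/nbd0 available
    fi

    rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc_py nbd_start_disk raid /dev/nbd0               # attach the raid bdev to the kernel device
    grep -q -w nbd0 /proc/partitions                   # poll until the device is registered
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /tmp/nbdtest                            # 4096 bytes read back
    rpc_py nbd_stop_disk /dev/nbd0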
00:06:48.744 [2024-07-25 11:18:04.456069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.002 [2024-07-25 11:18:04.635599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.002 [2024-07-25 11:18:04.874006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.260 [2024-07-25 11:18:05.078138] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.260 [2024-07-25 11:18:05.078193] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:06:49.827 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:50.086 [2024-07-25 11:18:05.762583] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:50.086 [2024-07-25 11:18:05.765039] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:50.086 [2024-07-25 11:18:05.765162] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:50.086 [2024-07-25 11:18:05.765180] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:50.086 [2024-07-25 11:18:05.765540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.086 [2024-07-25 11:18:05.765763] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:50.086 [2024-07-25 11:18:05.765816] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:50.086 [2024-07-25 11:18:05.765999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.086 Base_1 00:06:50.086 Base_2 00:06:50.086 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:50.086 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:50.086 11:18:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:50.345 11:18:06 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.345 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:50.603 [2024-07-25 11:18:06.338782] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:50.603 /dev/nbd0 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.603 1+0 records in 00:06:50.603 1+0 records out 00:06:50.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362595 s, 11.3 MB/s 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:50.603 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:50.603 11:18:06 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:50.861 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.861 { 00:06:50.862 "nbd_device": "/dev/nbd0", 00:06:50.862 "bdev_name": "raid" 00:06:50.862 } 00:06:50.862 ]' 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.862 { 00:06:50.862 "nbd_device": "/dev/nbd0", 00:06:50.862 "bdev_name": "raid" 00:06:50.862 } 00:06:50.862 ]' 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:50.862 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:51.120 4096+0 records in 00:06:51.120 4096+0 records out 00:06:51.120 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0300457 s, 
69.8 MB/s 00:06:51.120 11:18:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.379 4096+0 records in 00:06:51.379 4096+0 records out 00:06:51.379 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.339992 s, 6.2 MB/s 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.379 128+0 records in 00:06:51.379 128+0 records out 00:06:51.379 65536 bytes (66 kB, 64 KiB) copied, 0.00103369 s, 63.4 MB/s 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.379 2035+0 records in 00:06:51.379 2035+0 records out 00:06:51.379 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0112497 s, 92.6 MB/s 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.379 456+0 records in 00:06:51.379 456+0 records out 00:06:51.379 233472 bytes (233 kB, 228 KiB) copied, 0.00240612 s, 97.0 MB/s 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
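Each pass of the loop above zeroes one region of the reference file, discards the matching byte range on the exported device, flushes, and re-compares the whole 2 MiB image against the device. Condensed into a single iteration (offset, count and block size taken from the trace):

    # One discard-and-verify iteration; assumes the raid bdev is exported as
    # /dev/nbd0 and /raidtest/raidrandtest holds the 4096-block reference image.
    off=1028; num=2035; blksize=512
    dd if=/dev/zero of=/raidtest/raidrandtest bs=$blksize seek=$off count=$num conv=notrunc
    blkdiscard -o $((off * blksize)) -l $((num * blksize)) /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0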
00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.379 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:51.638 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.638 [2024-07-25 11:18:07.517065] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.897 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # count=0 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 62175 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 62175 ']' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 62175 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62175 00:06:52.156 killing process with pid 62175 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62175' 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 62175 00:06:52.156 [2024-07-25 11:18:07.852125] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.156 11:18:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 62175 00:06:52.156 [2024-07-25 11:18:07.852242] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.156 [2024-07-25 11:18:07.852314] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.156 [2024-07-25 11:18:07.852330] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:52.414 [2024-07-25 11:18:08.039239] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.350 11:18:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:06:53.350 00:06:53.350 real 0m4.863s 00:06:53.350 user 0m6.197s 00:06:53.350 sys 0m1.111s 00:06:53.350 ************************************ 00:06:53.350 END TEST raid_function_test_raid0 00:06:53.350 ************************************ 00:06:53.350 11:18:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.350 11:18:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:53.610 11:18:09 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:06:53.610 11:18:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.610 11:18:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.610 11:18:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.610 ************************************ 00:06:53.610 START TEST raid_function_test_concat 00:06:53.610 ************************************ 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 
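The teardown the raid0 run just completed — stop the NBD export, confirm nothing is left exported, then kill the app — amounts to roughly the following (pid and device from the trace; this mirrors what nbd_stop_disks, nbd_get_count and killprocess do, not their exact code):

    # Teardown sketch for the raid0 function test above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_stop_disk /dev/nbd0
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    (( count == 0 )) || echo "an NBD export is still present"
    kill 62175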
00:06:53.610 Process raid pid: 62308 00:06:53.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=62308 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 62308' 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 62308 /var/tmp/spdk-raid.sock 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 62308 ']' 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.610 11:18:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.610 [2024-07-25 11:18:09.355591] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:06:53.610 [2024-07-25 11:18:09.356034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.900 [2024-07-25 11:18:09.517866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.164 [2024-07-25 11:18:09.760377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.164 [2024-07-25 11:18:09.967664] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.164 [2024-07-25 11:18:09.967924] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:06:54.423 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:06:54.991 [2024-07-25 11:18:10.627800] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.991 [2024-07-25 11:18:10.630099] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.991 [2024-07-25 11:18:10.630213] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.991 [2024-07-25 11:18:10.630231] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:54.991 [2024-07-25 11:18:10.630556] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:54.991 [2024-07-25 11:18:10.630764] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.991 [2024-07-25 11:18:10.630786] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:54.991 [2024-07-25 11:18:10.630990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.991 Base_1 00:06:54.991 Base_2 00:06:54.991 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:06:54.991 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:06:54.991 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
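configure_raid_bdev writes a small batch of RPCs into rpcs.txt and pipes it through rpc.py; the file's exact contents are not shown in the trace, but judging by the Base_1/Base_2 claims above and the RPC names used elsewhere in this log, the setup is roughly equivalent to the sequence below (the bdev_raid_create flags are a reconstruction, not verbatim):

    # Plausible reconstruction of the concat setup; bdev names, socket and RPC
    # names appear in this log, the bdev_raid_create flags are assumed.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_null_create Base_1 32 512      # 32 MiB null bdev, 512-byte blocks
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r concat -b 'Base_1 Base_2' -n raid
    $rpc bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)'   # -> raid
    $rpc nbd_start_disk raid /dev/nbd0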
00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:55.254 11:18:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:06:55.513 [2024-07-25 11:18:11.192078] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:55.513 /dev/nbd0 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.513 1+0 records in 00:06:55.513 1+0 records out 00:06:55.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365187 s, 11.2 MB/s 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:06:55.513 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:55.773 { 00:06:55.773 "nbd_device": "/dev/nbd0", 00:06:55.773 "bdev_name": "raid" 00:06:55.773 } 00:06:55.773 ]' 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.773 { 00:06:55.773 "nbd_device": "/dev/nbd0", 00:06:55.773 "bdev_name": "raid" 00:06:55.773 } 00:06:55.773 ]' 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:55.773 4096+0 records in 00:06:55.773 
4096+0 records out 00:06:55.773 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0438024 s, 47.9 MB/s 00:06:55.773 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:56.343 4096+0 records in 00:06:56.343 4096+0 records out 00:06:56.343 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.34605 s, 6.1 MB/s 00:06:56.343 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:06:56.343 11:18:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:56.343 128+0 records in 00:06:56.343 128+0 records out 00:06:56.343 65536 bytes (66 kB, 64 KiB) copied, 0.00108869 s, 60.2 MB/s 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:56.343 2035+0 records in 00:06:56.343 2035+0 records out 00:06:56.343 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00862328 s, 121 MB/s 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:56.343 456+0 records in 00:06:56.343 456+0 records out 00:06:56.343 233472 bytes (233 kB, 228 KiB) copied, 0.00339806 s, 68.7 MB/s 00:06:56.343 
11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.343 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.603 [2024-07-25 11:18:12.396596] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:06:56.603 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.862 
11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 62308 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 62308 ']' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 62308 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62308 00:06:56.862 killing process with pid 62308 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62308' 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 62308 00:06:56.862 11:18:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 62308 00:06:56.862 [2024-07-25 11:18:12.731595] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.862 [2024-07-25 11:18:12.731727] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.862 [2024-07-25 11:18:12.731798] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.862 [2024-07-25 11:18:12.731814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:57.121 [2024-07-25 11:18:12.921588] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.497 11:18:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:06:58.497 00:06:58.497 real 0m4.843s 00:06:58.497 user 0m6.125s 00:06:58.497 sys 0m1.095s 00:06:58.497 11:18:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.497 11:18:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 ************************************ 00:06:58.497 END TEST raid_function_test_concat 00:06:58.497 ************************************ 00:06:58.497 11:18:14 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:06:58.497 11:18:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:58.497 11:18:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.497 11:18:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 ************************************ 00:06:58.497 START TEST raid0_resize_test 00:06:58.497 ************************************ 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # 
raid_resize_test 0 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:06:58.497 Process raid pid: 62452 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=62452 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 62452' 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 62452 /var/tmp/spdk-raid.sock 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 62452 ']' 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:58.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.497 11:18:14 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 [2024-07-25 11:18:14.265139] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
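With blksize=512 and bdev_size_mb=32 declared above, the block counts that the raid0 assertions below check follow directly from the raid level:

    # Expected sizes for the raid0 resize test, derived from the parameters above:
    #   per base bdev : 32 MiB / 512 B  = 65536 blocks
    #   Raid at create: 2 * 65536       = 131072 blocks (64 MiB)
    #   resize Base_1 only -> Raid stays at 131072 blocks
    #   resize Base_2 too  -> 2 * 131072 = 262144 blocks (128 MiB)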
00:06:58.497 [2024-07-25 11:18:14.265343] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.756 [2024-07-25 11:18:14.441446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.016 [2024-07-25 11:18:14.682607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.016 [2024-07-25 11:18:14.890357] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.016 [2024-07-25 11:18:14.890415] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.275 11:18:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.275 11:18:15 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:59.275 11:18:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:59.534 Base_1 00:06:59.534 11:18:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:59.794 Base_2 00:06:59.794 11:18:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:06:59.794 11:18:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:00.052 [2024-07-25 11:18:15.840459] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.052 [2024-07-25 11:18:15.842859] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.052 [2024-07-25 11:18:15.842976] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.052 [2024-07-25 11:18:15.842994] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.052 [2024-07-25 11:18:15.843366] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.052 [2024-07-25 11:18:15.843546] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.053 [2024-07-25 11:18:15.843567] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:00.053 [2024-07-25 11:18:15.843807] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.053 11:18:15 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:00.312 [2024-07-25 11:18:16.088488] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.312 [2024-07-25 11:18:16.088535] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:00.312 true 00:07:00.312 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:00.312 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:07:00.571 [2024-07-25 11:18:16.320766] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.571 11:18:16 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:07:00.571 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:07:00.571 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:07:00.571 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:07:00.571 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:07:00.571 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:00.933 [2024-07-25 11:18:16.608599] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.933 [2024-07-25 11:18:16.608667] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:00.933 [2024-07-25 11:18:16.608714] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:00.933 true 00:07:00.933 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:00.933 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:07:01.194 [2024-07-25 11:18:16.900897] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 62452 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 62452 ']' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 62452 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62452 00:07:01.194 killing process with pid 62452 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62452' 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 62452 00:07:01.194 [2024-07-25 11:18:16.962174] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.194 11:18:16 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 62452 00:07:01.194 [2024-07-25 11:18:16.962294] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.194 [2024-07-25 11:18:16.962361] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:07:01.194 [2024-07-25 11:18:16.962381] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:01.194 [2024-07-25 11:18:16.977625] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.570 ************************************ 00:07:02.570 END TEST raid0_resize_test 00:07:02.570 ************************************ 00:07:02.570 11:18:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:07:02.570 00:07:02.570 real 0m3.983s 00:07:02.570 user 0m5.562s 00:07:02.570 sys 0m0.602s 00:07:02.570 11:18:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.570 11:18:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 11:18:18 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:07:02.570 11:18:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.570 11:18:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.570 11:18:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 ************************************ 00:07:02.570 START TEST raid1_resize_test 00:07:02.570 ************************************ 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:07:02.570 Process raid pid: 62535 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=62535 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 62535' 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 62535 /var/tmp/spdk-raid.sock 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 62535 ']' 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:02.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
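raid1 mirrors rather than concatenates, so with the same 32 MiB bases the expected counts in the trace below differ from the raid0 case:

    # Expected sizes for the raid1 resize test (same 512-byte blocks, 32 MiB bases):
    #   Raid at create    : 65536 blocks (32 MiB) -- a mirror, not a sum
    #   resize Base_1 only: still 65536 blocks (the smaller member bounds the size)
    #   resize Base_2 too : 131072 blocks (64 MiB)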
00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.570 11:18:18 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.570 [2024-07-25 11:18:18.296919] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:02.570 [2024-07-25 11:18:18.297377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.829 [2024-07-25 11:18:18.471560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.088 [2024-07-25 11:18:18.711829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.088 [2024-07-25 11:18:18.918115] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.088 [2024-07-25 11:18:18.918172] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.655 11:18:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.655 11:18:19 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.655 11:18:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:03.655 Base_1 00:07:03.655 11:18:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:03.914 Base_2 00:07:03.914 11:18:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:07:03.914 11:18:19 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:07:04.172 [2024-07-25 11:18:20.000173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.172 [2024-07-25 11:18:20.002519] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.172 [2024-07-25 11:18:20.002646] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.172 [2024-07-25 11:18:20.002664] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:04.172 [2024-07-25 11:18:20.003055] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:04.172 [2024-07-25 11:18:20.003247] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.172 [2024-07-25 11:18:20.003272] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:04.172 [2024-07-25 11:18:20.003479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.172 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:04.430 [2024-07-25 11:18:20.268182] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.431 [2024-07-25 11:18:20.268232] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.431 true 00:07:04.431 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:04.431 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:07:04.688 [2024-07-25 11:18:20.532444] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:04.688 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:07:04.688 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:07:04.688 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:07:04.688 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:07:04.688 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:07:04.689 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:04.946 [2024-07-25 11:18:20.804311] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.946 [2024-07-25 11:18:20.804575] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:04.946 [2024-07-25 11:18:20.804793] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:04.946 true 00:07:04.946 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:04.946 11:18:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:07:05.204 [2024-07-25 11:18:21.072585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 62535 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 62535 ']' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 62535 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62535 00:07:05.463 killing process with pid 62535 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62535' 00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 62535 00:07:05.463 [2024-07-25 11:18:21.120333] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
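For reference, the block counts reported in the two resize runs follow directly from the bdev geometry visible in the trace: the null base bdevs use 512-byte blocks, so

  32 MiB / 512 B = 65536 blocks        64 MiB / 512 B = 131072 blocks

and, as the resize notices show, the raid volume only grows once every base bdev has grown. The raid0 volume therefore goes from 2 * 65536 = 131072 blocks (64 MiB) to 2 * 131072 = 262144 blocks (128 MiB) after both bases reach 64 MiB, while the raid1 mirror stays at 65536 blocks after only Base_1 is resized and reports 131072 blocks (64 MiB) once Base_2 follows.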
00:07:05.463 11:18:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 62535 00:07:05.463 [2024-07-25 11:18:21.120431] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.463 [2024-07-25 11:18:21.121028] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.463 [2024-07-25 11:18:21.121060] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:05.463 [2024-07-25 11:18:21.135934] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.838 ************************************ 00:07:06.838 END TEST raid1_resize_test 00:07:06.838 ************************************ 00:07:06.838 11:18:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:07:06.838 00:07:06.838 real 0m4.105s 00:07:06.838 user 0m5.804s 00:07:06.838 sys 0m0.621s 00:07:06.838 11:18:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.838 11:18:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.838 11:18:22 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:07:06.838 11:18:22 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:07:06.838 11:18:22 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:06.838 11:18:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:06.838 11:18:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.838 11:18:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.838 ************************************ 00:07:06.839 START TEST raid_state_function_test 00:07:06.839 ************************************ 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- 
# local raid_bdev_name=Existed_Raid 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:06.839 Process raid pid: 62619 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62619 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62619' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62619 /var/tmp/spdk-raid.sock 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62619 ']' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.839 11:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.839 [2024-07-25 11:18:22.456163] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
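The raid_state_function_test instance starting here walks a two-disk raid0 volume (no superblock) through its configuring, online and offline states. Condensed from the RPC trace that follows (all commands and names appear in the trace; the script also deletes and re-creates Existed_Raid between several of these steps), the shape of the run is roughly:

  # create the raid before its base bdevs exist: it stays in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
  # add the base bdevs one at a time; once both are claimed the raid goes "online"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  # raid0 has no redundancy, so removing a base bdev drops the raid to "offline"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2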
00:07:06.839 [2024-07-25 11:18:22.456637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.839 [2024-07-25 11:18:22.631036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.098 [2024-07-25 11:18:22.869178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.357 [2024-07-25 11:18:23.075336] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.357 [2024-07-25 11:18:23.075393] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.619 11:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.619 11:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:07.619 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:07.877 [2024-07-25 11:18:23.634312] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.877 [2024-07-25 11:18:23.634390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.877 [2024-07-25 11:18:23.634410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.877 [2024-07-25 11:18:23.634424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.877 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.136 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:08.136 "name": "Existed_Raid", 00:07:08.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.136 "strip_size_kb": 64, 00:07:08.136 "state": "configuring", 00:07:08.136 "raid_level": "raid0", 00:07:08.136 "superblock": false, 00:07:08.136 "num_base_bdevs": 2, 
00:07:08.136 "num_base_bdevs_discovered": 0, 00:07:08.136 "num_base_bdevs_operational": 2, 00:07:08.136 "base_bdevs_list": [ 00:07:08.136 { 00:07:08.136 "name": "BaseBdev1", 00:07:08.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.136 "is_configured": false, 00:07:08.136 "data_offset": 0, 00:07:08.136 "data_size": 0 00:07:08.136 }, 00:07:08.136 { 00:07:08.136 "name": "BaseBdev2", 00:07:08.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.136 "is_configured": false, 00:07:08.136 "data_offset": 0, 00:07:08.136 "data_size": 0 00:07:08.136 } 00:07:08.136 ] 00:07:08.136 }' 00:07:08.136 11:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:08.136 11:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.701 11:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:08.958 [2024-07-25 11:18:24.746438] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.958 [2024-07-25 11:18:24.746480] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:08.958 11:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:09.217 [2024-07-25 11:18:24.982543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:09.217 [2024-07-25 11:18:24.982608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:09.217 [2024-07-25 11:18:24.982650] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.217 [2024-07-25 11:18:24.982666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.217 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:09.475 [2024-07-25 11:18:25.295239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.475 BaseBdev1 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:09.475 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:09.733 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:09.991 [ 00:07:09.991 { 00:07:09.991 "name": "BaseBdev1", 00:07:09.991 "aliases": [ 00:07:09.991 
"700498b7-cbff-4244-9aa7-603d59336513" 00:07:09.991 ], 00:07:09.991 "product_name": "Malloc disk", 00:07:09.991 "block_size": 512, 00:07:09.991 "num_blocks": 65536, 00:07:09.991 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:09.991 "assigned_rate_limits": { 00:07:09.991 "rw_ios_per_sec": 0, 00:07:09.991 "rw_mbytes_per_sec": 0, 00:07:09.991 "r_mbytes_per_sec": 0, 00:07:09.991 "w_mbytes_per_sec": 0 00:07:09.991 }, 00:07:09.991 "claimed": true, 00:07:09.991 "claim_type": "exclusive_write", 00:07:09.991 "zoned": false, 00:07:09.991 "supported_io_types": { 00:07:09.991 "read": true, 00:07:09.991 "write": true, 00:07:09.991 "unmap": true, 00:07:09.991 "flush": true, 00:07:09.991 "reset": true, 00:07:09.991 "nvme_admin": false, 00:07:09.991 "nvme_io": false, 00:07:09.991 "nvme_io_md": false, 00:07:09.991 "write_zeroes": true, 00:07:09.991 "zcopy": true, 00:07:09.991 "get_zone_info": false, 00:07:09.991 "zone_management": false, 00:07:09.991 "zone_append": false, 00:07:09.991 "compare": false, 00:07:09.991 "compare_and_write": false, 00:07:09.991 "abort": true, 00:07:09.991 "seek_hole": false, 00:07:09.991 "seek_data": false, 00:07:09.991 "copy": true, 00:07:09.991 "nvme_iov_md": false 00:07:09.991 }, 00:07:09.991 "memory_domains": [ 00:07:09.991 { 00:07:09.991 "dma_device_id": "system", 00:07:09.991 "dma_device_type": 1 00:07:09.991 }, 00:07:09.991 { 00:07:09.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.991 "dma_device_type": 2 00:07:09.991 } 00:07:09.991 ], 00:07:09.991 "driver_specific": {} 00:07:09.991 } 00:07:09.991 ] 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.991 11:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.250 11:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:10.250 "name": "Existed_Raid", 00:07:10.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.250 "strip_size_kb": 64, 00:07:10.250 "state": "configuring", 00:07:10.250 "raid_level": "raid0", 00:07:10.250 "superblock": false, 00:07:10.250 "num_base_bdevs": 2, 00:07:10.250 "num_base_bdevs_discovered": 
1, 00:07:10.250 "num_base_bdevs_operational": 2, 00:07:10.250 "base_bdevs_list": [ 00:07:10.250 { 00:07:10.250 "name": "BaseBdev1", 00:07:10.250 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:10.250 "is_configured": true, 00:07:10.250 "data_offset": 0, 00:07:10.250 "data_size": 65536 00:07:10.250 }, 00:07:10.250 { 00:07:10.250 "name": "BaseBdev2", 00:07:10.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.250 "is_configured": false, 00:07:10.250 "data_offset": 0, 00:07:10.250 "data_size": 0 00:07:10.250 } 00:07:10.250 ] 00:07:10.250 }' 00:07:10.250 11:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:10.250 11:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.183 11:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:11.183 [2024-07-25 11:18:26.919803] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.183 [2024-07-25 11:18:26.919878] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:11.183 11:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:11.441 [2024-07-25 11:18:27.139859] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.441 [2024-07-25 11:18:27.142264] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.441 [2024-07-25 11:18:27.142317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.441 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.752 11:18:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:11.752 "name": "Existed_Raid", 00:07:11.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.752 "strip_size_kb": 64, 00:07:11.752 "state": "configuring", 00:07:11.752 "raid_level": "raid0", 00:07:11.752 "superblock": false, 00:07:11.752 "num_base_bdevs": 2, 00:07:11.752 "num_base_bdevs_discovered": 1, 00:07:11.752 "num_base_bdevs_operational": 2, 00:07:11.752 "base_bdevs_list": [ 00:07:11.752 { 00:07:11.752 "name": "BaseBdev1", 00:07:11.752 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:11.752 "is_configured": true, 00:07:11.752 "data_offset": 0, 00:07:11.752 "data_size": 65536 00:07:11.752 }, 00:07:11.752 { 00:07:11.752 "name": "BaseBdev2", 00:07:11.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.752 "is_configured": false, 00:07:11.752 "data_offset": 0, 00:07:11.752 "data_size": 0 00:07:11.752 } 00:07:11.752 ] 00:07:11.752 }' 00:07:11.752 11:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:11.752 11:18:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.320 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:12.579 [2024-07-25 11:18:28.334695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.579 [2024-07-25 11:18:28.334749] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.579 [2024-07-25 11:18:28.334767] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:12.579 [2024-07-25 11:18:28.335113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.579 [2024-07-25 11:18:28.335318] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.579 [2024-07-25 11:18:28.335334] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:12.579 [2024-07-25 11:18:28.335691] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.579 BaseBdev2 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.579 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:12.838 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:13.096 [ 00:07:13.096 { 00:07:13.096 "name": "BaseBdev2", 00:07:13.096 "aliases": [ 00:07:13.096 "f4ca3e89-d209-4fe6-b50d-87d84d1461fb" 00:07:13.096 ], 00:07:13.096 "product_name": "Malloc disk", 
00:07:13.096 "block_size": 512, 00:07:13.096 "num_blocks": 65536, 00:07:13.096 "uuid": "f4ca3e89-d209-4fe6-b50d-87d84d1461fb", 00:07:13.096 "assigned_rate_limits": { 00:07:13.096 "rw_ios_per_sec": 0, 00:07:13.096 "rw_mbytes_per_sec": 0, 00:07:13.096 "r_mbytes_per_sec": 0, 00:07:13.096 "w_mbytes_per_sec": 0 00:07:13.096 }, 00:07:13.096 "claimed": true, 00:07:13.096 "claim_type": "exclusive_write", 00:07:13.096 "zoned": false, 00:07:13.096 "supported_io_types": { 00:07:13.096 "read": true, 00:07:13.096 "write": true, 00:07:13.096 "unmap": true, 00:07:13.096 "flush": true, 00:07:13.096 "reset": true, 00:07:13.096 "nvme_admin": false, 00:07:13.096 "nvme_io": false, 00:07:13.096 "nvme_io_md": false, 00:07:13.096 "write_zeroes": true, 00:07:13.096 "zcopy": true, 00:07:13.096 "get_zone_info": false, 00:07:13.096 "zone_management": false, 00:07:13.096 "zone_append": false, 00:07:13.096 "compare": false, 00:07:13.096 "compare_and_write": false, 00:07:13.096 "abort": true, 00:07:13.096 "seek_hole": false, 00:07:13.096 "seek_data": false, 00:07:13.096 "copy": true, 00:07:13.096 "nvme_iov_md": false 00:07:13.096 }, 00:07:13.096 "memory_domains": [ 00:07:13.096 { 00:07:13.096 "dma_device_id": "system", 00:07:13.096 "dma_device_type": 1 00:07:13.096 }, 00:07:13.096 { 00:07:13.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.096 "dma_device_type": 2 00:07:13.096 } 00:07:13.096 ], 00:07:13.096 "driver_specific": {} 00:07:13.096 } 00:07:13.096 ] 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:13.096 11:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.355 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:13.355 "name": "Existed_Raid", 00:07:13.355 "uuid": "37253e12-514f-4722-a397-460ba21a0215", 00:07:13.355 "strip_size_kb": 64, 00:07:13.355 "state": "online", 00:07:13.355 "raid_level": "raid0", 00:07:13.355 
"superblock": false, 00:07:13.355 "num_base_bdevs": 2, 00:07:13.355 "num_base_bdevs_discovered": 2, 00:07:13.355 "num_base_bdevs_operational": 2, 00:07:13.355 "base_bdevs_list": [ 00:07:13.355 { 00:07:13.355 "name": "BaseBdev1", 00:07:13.355 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:13.355 "is_configured": true, 00:07:13.355 "data_offset": 0, 00:07:13.355 "data_size": 65536 00:07:13.355 }, 00:07:13.355 { 00:07:13.355 "name": "BaseBdev2", 00:07:13.355 "uuid": "f4ca3e89-d209-4fe6-b50d-87d84d1461fb", 00:07:13.355 "is_configured": true, 00:07:13.355 "data_offset": 0, 00:07:13.355 "data_size": 65536 00:07:13.355 } 00:07:13.355 ] 00:07:13.355 }' 00:07:13.355 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:13.355 11:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:13.921 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:14.180 [2024-07-25 11:18:29.979536] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.180 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:14.180 "name": "Existed_Raid", 00:07:14.180 "aliases": [ 00:07:14.180 "37253e12-514f-4722-a397-460ba21a0215" 00:07:14.180 ], 00:07:14.180 "product_name": "Raid Volume", 00:07:14.180 "block_size": 512, 00:07:14.180 "num_blocks": 131072, 00:07:14.180 "uuid": "37253e12-514f-4722-a397-460ba21a0215", 00:07:14.180 "assigned_rate_limits": { 00:07:14.180 "rw_ios_per_sec": 0, 00:07:14.180 "rw_mbytes_per_sec": 0, 00:07:14.180 "r_mbytes_per_sec": 0, 00:07:14.180 "w_mbytes_per_sec": 0 00:07:14.180 }, 00:07:14.180 "claimed": false, 00:07:14.180 "zoned": false, 00:07:14.180 "supported_io_types": { 00:07:14.180 "read": true, 00:07:14.180 "write": true, 00:07:14.180 "unmap": true, 00:07:14.180 "flush": true, 00:07:14.180 "reset": true, 00:07:14.180 "nvme_admin": false, 00:07:14.180 "nvme_io": false, 00:07:14.180 "nvme_io_md": false, 00:07:14.180 "write_zeroes": true, 00:07:14.180 "zcopy": false, 00:07:14.180 "get_zone_info": false, 00:07:14.180 "zone_management": false, 00:07:14.180 "zone_append": false, 00:07:14.180 "compare": false, 00:07:14.180 "compare_and_write": false, 00:07:14.180 "abort": false, 00:07:14.180 "seek_hole": false, 00:07:14.180 "seek_data": false, 00:07:14.180 "copy": false, 00:07:14.180 "nvme_iov_md": false 00:07:14.180 }, 00:07:14.180 "memory_domains": [ 00:07:14.180 { 00:07:14.180 "dma_device_id": "system", 00:07:14.180 "dma_device_type": 1 00:07:14.180 }, 00:07:14.180 { 00:07:14.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.180 "dma_device_type": 2 
00:07:14.180 }, 00:07:14.180 { 00:07:14.180 "dma_device_id": "system", 00:07:14.180 "dma_device_type": 1 00:07:14.180 }, 00:07:14.180 { 00:07:14.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.180 "dma_device_type": 2 00:07:14.180 } 00:07:14.180 ], 00:07:14.180 "driver_specific": { 00:07:14.180 "raid": { 00:07:14.180 "uuid": "37253e12-514f-4722-a397-460ba21a0215", 00:07:14.180 "strip_size_kb": 64, 00:07:14.180 "state": "online", 00:07:14.180 "raid_level": "raid0", 00:07:14.180 "superblock": false, 00:07:14.180 "num_base_bdevs": 2, 00:07:14.180 "num_base_bdevs_discovered": 2, 00:07:14.180 "num_base_bdevs_operational": 2, 00:07:14.180 "base_bdevs_list": [ 00:07:14.180 { 00:07:14.180 "name": "BaseBdev1", 00:07:14.180 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:14.180 "is_configured": true, 00:07:14.180 "data_offset": 0, 00:07:14.180 "data_size": 65536 00:07:14.180 }, 00:07:14.180 { 00:07:14.180 "name": "BaseBdev2", 00:07:14.180 "uuid": "f4ca3e89-d209-4fe6-b50d-87d84d1461fb", 00:07:14.180 "is_configured": true, 00:07:14.180 "data_offset": 0, 00:07:14.180 "data_size": 65536 00:07:14.180 } 00:07:14.180 ] 00:07:14.180 } 00:07:14.180 } 00:07:14.180 }' 00:07:14.180 11:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.180 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:14.180 BaseBdev2' 00:07:14.180 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:14.180 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:14.180 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:14.439 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:14.439 "name": "BaseBdev1", 00:07:14.439 "aliases": [ 00:07:14.439 "700498b7-cbff-4244-9aa7-603d59336513" 00:07:14.439 ], 00:07:14.439 "product_name": "Malloc disk", 00:07:14.439 "block_size": 512, 00:07:14.439 "num_blocks": 65536, 00:07:14.439 "uuid": "700498b7-cbff-4244-9aa7-603d59336513", 00:07:14.439 "assigned_rate_limits": { 00:07:14.439 "rw_ios_per_sec": 0, 00:07:14.439 "rw_mbytes_per_sec": 0, 00:07:14.439 "r_mbytes_per_sec": 0, 00:07:14.439 "w_mbytes_per_sec": 0 00:07:14.439 }, 00:07:14.439 "claimed": true, 00:07:14.439 "claim_type": "exclusive_write", 00:07:14.439 "zoned": false, 00:07:14.439 "supported_io_types": { 00:07:14.439 "read": true, 00:07:14.439 "write": true, 00:07:14.439 "unmap": true, 00:07:14.439 "flush": true, 00:07:14.439 "reset": true, 00:07:14.439 "nvme_admin": false, 00:07:14.439 "nvme_io": false, 00:07:14.439 "nvme_io_md": false, 00:07:14.439 "write_zeroes": true, 00:07:14.439 "zcopy": true, 00:07:14.439 "get_zone_info": false, 00:07:14.439 "zone_management": false, 00:07:14.439 "zone_append": false, 00:07:14.439 "compare": false, 00:07:14.439 "compare_and_write": false, 00:07:14.439 "abort": true, 00:07:14.439 "seek_hole": false, 00:07:14.439 "seek_data": false, 00:07:14.439 "copy": true, 00:07:14.439 "nvme_iov_md": false 00:07:14.439 }, 00:07:14.439 "memory_domains": [ 00:07:14.439 { 00:07:14.439 "dma_device_id": "system", 00:07:14.439 "dma_device_type": 1 00:07:14.439 }, 00:07:14.439 { 00:07:14.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.439 "dma_device_type": 2 00:07:14.439 } 
00:07:14.439 ], 00:07:14.439 "driver_specific": {} 00:07:14.439 }' 00:07:14.439 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.698 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:14.956 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:15.214 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:15.214 "name": "BaseBdev2", 00:07:15.214 "aliases": [ 00:07:15.214 "f4ca3e89-d209-4fe6-b50d-87d84d1461fb" 00:07:15.214 ], 00:07:15.214 "product_name": "Malloc disk", 00:07:15.214 "block_size": 512, 00:07:15.214 "num_blocks": 65536, 00:07:15.214 "uuid": "f4ca3e89-d209-4fe6-b50d-87d84d1461fb", 00:07:15.214 "assigned_rate_limits": { 00:07:15.214 "rw_ios_per_sec": 0, 00:07:15.214 "rw_mbytes_per_sec": 0, 00:07:15.214 "r_mbytes_per_sec": 0, 00:07:15.214 "w_mbytes_per_sec": 0 00:07:15.214 }, 00:07:15.214 "claimed": true, 00:07:15.214 "claim_type": "exclusive_write", 00:07:15.214 "zoned": false, 00:07:15.214 "supported_io_types": { 00:07:15.214 "read": true, 00:07:15.214 "write": true, 00:07:15.214 "unmap": true, 00:07:15.214 "flush": true, 00:07:15.214 "reset": true, 00:07:15.214 "nvme_admin": false, 00:07:15.214 "nvme_io": false, 00:07:15.214 "nvme_io_md": false, 00:07:15.214 "write_zeroes": true, 00:07:15.214 "zcopy": true, 00:07:15.214 "get_zone_info": false, 00:07:15.214 "zone_management": false, 00:07:15.214 "zone_append": false, 00:07:15.214 "compare": false, 00:07:15.214 "compare_and_write": false, 00:07:15.214 "abort": true, 00:07:15.214 "seek_hole": false, 00:07:15.214 "seek_data": false, 00:07:15.214 "copy": true, 00:07:15.214 "nvme_iov_md": false 00:07:15.214 }, 00:07:15.214 "memory_domains": [ 00:07:15.214 { 00:07:15.214 "dma_device_id": "system", 00:07:15.214 "dma_device_type": 1 00:07:15.214 }, 00:07:15.214 { 00:07:15.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.214 "dma_device_type": 2 00:07:15.214 } 00:07:15.214 ], 00:07:15.214 "driver_specific": {} 00:07:15.214 }' 00:07:15.214 11:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:15.214 11:18:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:15.214 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:15.214 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:15.471 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:15.729 [2024-07-25 11:18:31.543732] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.729 [2024-07-25 11:18:31.543775] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.729 [2024-07-25 11:18:31.543863] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:15.988 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.988 11:18:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.246 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:16.246 "name": "Existed_Raid", 00:07:16.246 "uuid": "37253e12-514f-4722-a397-460ba21a0215", 00:07:16.246 "strip_size_kb": 64, 00:07:16.246 "state": "offline", 00:07:16.246 "raid_level": "raid0", 00:07:16.246 "superblock": false, 00:07:16.246 "num_base_bdevs": 2, 00:07:16.246 "num_base_bdevs_discovered": 1, 00:07:16.246 "num_base_bdevs_operational": 1, 00:07:16.246 "base_bdevs_list": [ 00:07:16.246 { 00:07:16.246 "name": null, 00:07:16.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.246 "is_configured": false, 00:07:16.246 "data_offset": 0, 00:07:16.246 "data_size": 65536 00:07:16.246 }, 00:07:16.246 { 00:07:16.246 "name": "BaseBdev2", 00:07:16.246 "uuid": "f4ca3e89-d209-4fe6-b50d-87d84d1461fb", 00:07:16.246 "is_configured": true, 00:07:16.246 "data_offset": 0, 00:07:16.246 "data_size": 65536 00:07:16.246 } 00:07:16.246 ] 00:07:16.246 }' 00:07:16.246 11:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:16.246 11:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.811 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:16.811 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:16.811 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.811 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:17.069 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:17.069 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:17.069 11:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:17.327 [2024-07-25 11:18:32.995370] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:17.327 [2024-07-25 11:18:32.995454] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:17.327 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:17.327 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:17.328 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.328 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62619 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62619 ']' 00:07:17.586 11:18:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62619 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62619 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.586 killing process with pid 62619 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62619' 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62619 00:07:17.586 [2024-07-25 11:18:33.353683] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.586 11:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62619 00:07:17.586 [2024-07-25 11:18:33.368317] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:18.960 00:07:18.960 real 0m12.204s 00:07:18.960 user 0m21.190s 00:07:18.960 sys 0m1.555s 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.960 ************************************ 00:07:18.960 END TEST raid_state_function_test 00:07:18.960 ************************************ 00:07:18.960 11:18:34 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:18.960 11:18:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:18.960 11:18:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.960 11:18:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.960 ************************************ 00:07:18.960 START TEST raid_state_function_test_sb 00:07:18.960 ************************************ 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:18.960 Process raid pid: 62987 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=62987 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62987' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 62987 /var/tmp/spdk-raid.sock 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62987 ']' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.960 11:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.960 [2024-07-25 11:18:34.721843] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
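The raid_state_function_test_sb run starting here repeats the same state-transition checks with superblock metadata enabled, so the only material change in the driving RPCs visible in the trace is the -s flag on creation, e.g.:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid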
00:07:18.960 [2024-07-25 11:18:34.721984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.219 [2024-07-25 11:18:34.888793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.477 [2024-07-25 11:18:35.129523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.477 [2024-07-25 11:18:35.337719] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.477 [2024-07-25 11:18:35.337774] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:20.044 [2024-07-25 11:18:35.882140] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:20.044 [2024-07-25 11:18:35.882220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:20.044 [2024-07-25 11:18:35.882242] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:20.044 [2024-07-25 11:18:35.882256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.044 11:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.611 11:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.611 "name": "Existed_Raid", 00:07:20.611 "uuid": "22bb14ad-7ff5-47ff-b3a8-688015ff885f", 00:07:20.611 "strip_size_kb": 64, 00:07:20.611 "state": "configuring", 00:07:20.611 "raid_level": "raid0", 00:07:20.611 
"superblock": true, 00:07:20.611 "num_base_bdevs": 2, 00:07:20.611 "num_base_bdevs_discovered": 0, 00:07:20.611 "num_base_bdevs_operational": 2, 00:07:20.611 "base_bdevs_list": [ 00:07:20.611 { 00:07:20.611 "name": "BaseBdev1", 00:07:20.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.611 "is_configured": false, 00:07:20.611 "data_offset": 0, 00:07:20.611 "data_size": 0 00:07:20.611 }, 00:07:20.611 { 00:07:20.611 "name": "BaseBdev2", 00:07:20.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:20.611 "is_configured": false, 00:07:20.611 "data_offset": 0, 00:07:20.611 "data_size": 0 00:07:20.611 } 00:07:20.611 ] 00:07:20.611 }' 00:07:20.611 11:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.611 11:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.177 11:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:21.436 [2024-07-25 11:18:37.134298] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.436 [2024-07-25 11:18:37.134576] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:21.436 11:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:21.695 [2024-07-25 11:18:37.422400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.695 [2024-07-25 11:18:37.422473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.695 [2024-07-25 11:18:37.422493] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.695 [2024-07-25 11:18:37.422507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.695 11:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.954 [2024-07-25 11:18:37.727177] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.954 BaseBdev1 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:21.954 11:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:22.212 11:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:22.470 [ 00:07:22.470 { 
00:07:22.470 "name": "BaseBdev1", 00:07:22.470 "aliases": [ 00:07:22.470 "face7c0e-117d-4174-a8f0-fc103ba35f0e" 00:07:22.470 ], 00:07:22.470 "product_name": "Malloc disk", 00:07:22.470 "block_size": 512, 00:07:22.470 "num_blocks": 65536, 00:07:22.470 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:22.470 "assigned_rate_limits": { 00:07:22.470 "rw_ios_per_sec": 0, 00:07:22.470 "rw_mbytes_per_sec": 0, 00:07:22.470 "r_mbytes_per_sec": 0, 00:07:22.470 "w_mbytes_per_sec": 0 00:07:22.470 }, 00:07:22.470 "claimed": true, 00:07:22.470 "claim_type": "exclusive_write", 00:07:22.470 "zoned": false, 00:07:22.470 "supported_io_types": { 00:07:22.470 "read": true, 00:07:22.470 "write": true, 00:07:22.470 "unmap": true, 00:07:22.470 "flush": true, 00:07:22.471 "reset": true, 00:07:22.471 "nvme_admin": false, 00:07:22.471 "nvme_io": false, 00:07:22.471 "nvme_io_md": false, 00:07:22.471 "write_zeroes": true, 00:07:22.471 "zcopy": true, 00:07:22.471 "get_zone_info": false, 00:07:22.471 "zone_management": false, 00:07:22.471 "zone_append": false, 00:07:22.471 "compare": false, 00:07:22.471 "compare_and_write": false, 00:07:22.471 "abort": true, 00:07:22.471 "seek_hole": false, 00:07:22.471 "seek_data": false, 00:07:22.471 "copy": true, 00:07:22.471 "nvme_iov_md": false 00:07:22.471 }, 00:07:22.471 "memory_domains": [ 00:07:22.471 { 00:07:22.471 "dma_device_id": "system", 00:07:22.471 "dma_device_type": 1 00:07:22.471 }, 00:07:22.471 { 00:07:22.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.471 "dma_device_type": 2 00:07:22.471 } 00:07:22.471 ], 00:07:22.471 "driver_specific": {} 00:07:22.471 } 00:07:22.471 ] 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.471 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.729 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:22.729 "name": "Existed_Raid", 00:07:22.729 "uuid": "647303f4-a9b6-4493-8d69-7c6a608d53e2", 00:07:22.729 "strip_size_kb": 64, 00:07:22.729 "state": "configuring", 00:07:22.729 "raid_level": 
"raid0", 00:07:22.729 "superblock": true, 00:07:22.729 "num_base_bdevs": 2, 00:07:22.729 "num_base_bdevs_discovered": 1, 00:07:22.729 "num_base_bdevs_operational": 2, 00:07:22.729 "base_bdevs_list": [ 00:07:22.729 { 00:07:22.729 "name": "BaseBdev1", 00:07:22.729 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:22.729 "is_configured": true, 00:07:22.729 "data_offset": 2048, 00:07:22.729 "data_size": 63488 00:07:22.729 }, 00:07:22.729 { 00:07:22.729 "name": "BaseBdev2", 00:07:22.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.729 "is_configured": false, 00:07:22.729 "data_offset": 0, 00:07:22.729 "data_size": 0 00:07:22.729 } 00:07:22.729 ] 00:07:22.729 }' 00:07:22.729 11:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:22.729 11:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.664 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:23.664 [2024-07-25 11:18:39.511769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.664 [2024-07-25 11:18:39.511855] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:23.664 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:23.923 [2024-07-25 11:18:39.795893] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.923 [2024-07-25 11:18:39.798295] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.923 [2024-07-25 11:18:39.798344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:24.182 11:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.182 11:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:24.182 "name": "Existed_Raid", 00:07:24.182 "uuid": "435669d1-61d5-4461-855b-f3c24cf74a46", 00:07:24.182 "strip_size_kb": 64, 00:07:24.182 "state": "configuring", 00:07:24.182 "raid_level": "raid0", 00:07:24.182 "superblock": true, 00:07:24.182 "num_base_bdevs": 2, 00:07:24.182 "num_base_bdevs_discovered": 1, 00:07:24.182 "num_base_bdevs_operational": 2, 00:07:24.182 "base_bdevs_list": [ 00:07:24.182 { 00:07:24.182 "name": "BaseBdev1", 00:07:24.182 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:24.182 "is_configured": true, 00:07:24.182 "data_offset": 2048, 00:07:24.182 "data_size": 63488 00:07:24.182 }, 00:07:24.182 { 00:07:24.182 "name": "BaseBdev2", 00:07:24.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.182 "is_configured": false, 00:07:24.182 "data_offset": 0, 00:07:24.183 "data_size": 0 00:07:24.183 } 00:07:24.183 ] 00:07:24.183 }' 00:07:24.183 11:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:24.183 11:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.119 11:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:25.119 [2024-07-25 11:18:40.990231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:25.119 [2024-07-25 11:18:40.990522] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:25.119 [2024-07-25 11:18:40.990546] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:25.119 [2024-07-25 11:18:40.990958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:25.119 [2024-07-25 11:18:40.991163] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:25.119 [2024-07-25 11:18:40.991180] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:25.119 BaseBdev2 00:07:25.119 [2024-07-25 11:18:40.991354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:25.378 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:25.637 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
-t 2000 00:07:25.895 [ 00:07:25.895 { 00:07:25.895 "name": "BaseBdev2", 00:07:25.895 "aliases": [ 00:07:25.895 "12650ffb-0310-43b9-b31b-5fda69bc130b" 00:07:25.895 ], 00:07:25.895 "product_name": "Malloc disk", 00:07:25.895 "block_size": 512, 00:07:25.895 "num_blocks": 65536, 00:07:25.895 "uuid": "12650ffb-0310-43b9-b31b-5fda69bc130b", 00:07:25.895 "assigned_rate_limits": { 00:07:25.895 "rw_ios_per_sec": 0, 00:07:25.895 "rw_mbytes_per_sec": 0, 00:07:25.895 "r_mbytes_per_sec": 0, 00:07:25.895 "w_mbytes_per_sec": 0 00:07:25.895 }, 00:07:25.895 "claimed": true, 00:07:25.895 "claim_type": "exclusive_write", 00:07:25.895 "zoned": false, 00:07:25.895 "supported_io_types": { 00:07:25.895 "read": true, 00:07:25.895 "write": true, 00:07:25.895 "unmap": true, 00:07:25.895 "flush": true, 00:07:25.896 "reset": true, 00:07:25.896 "nvme_admin": false, 00:07:25.896 "nvme_io": false, 00:07:25.896 "nvme_io_md": false, 00:07:25.896 "write_zeroes": true, 00:07:25.896 "zcopy": true, 00:07:25.896 "get_zone_info": false, 00:07:25.896 "zone_management": false, 00:07:25.896 "zone_append": false, 00:07:25.896 "compare": false, 00:07:25.896 "compare_and_write": false, 00:07:25.896 "abort": true, 00:07:25.896 "seek_hole": false, 00:07:25.896 "seek_data": false, 00:07:25.896 "copy": true, 00:07:25.896 "nvme_iov_md": false 00:07:25.896 }, 00:07:25.896 "memory_domains": [ 00:07:25.896 { 00:07:25.896 "dma_device_id": "system", 00:07:25.896 "dma_device_type": 1 00:07:25.896 }, 00:07:25.896 { 00:07:25.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.896 "dma_device_type": 2 00:07:25.896 } 00:07:25.896 ], 00:07:25.896 "driver_specific": {} 00:07:25.896 } 00:07:25.896 ] 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.896 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.154 11:18:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:26.154 "name": "Existed_Raid", 00:07:26.154 "uuid": "435669d1-61d5-4461-855b-f3c24cf74a46", 00:07:26.154 "strip_size_kb": 64, 00:07:26.154 "state": "online", 00:07:26.154 "raid_level": "raid0", 00:07:26.154 "superblock": true, 00:07:26.154 "num_base_bdevs": 2, 00:07:26.154 "num_base_bdevs_discovered": 2, 00:07:26.154 "num_base_bdevs_operational": 2, 00:07:26.154 "base_bdevs_list": [ 00:07:26.154 { 00:07:26.154 "name": "BaseBdev1", 00:07:26.154 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:26.154 "is_configured": true, 00:07:26.154 "data_offset": 2048, 00:07:26.154 "data_size": 63488 00:07:26.154 }, 00:07:26.154 { 00:07:26.154 "name": "BaseBdev2", 00:07:26.154 "uuid": "12650ffb-0310-43b9-b31b-5fda69bc130b", 00:07:26.154 "is_configured": true, 00:07:26.154 "data_offset": 2048, 00:07:26.154 "data_size": 63488 00:07:26.154 } 00:07:26.154 ] 00:07:26.154 }' 00:07:26.154 11:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:26.154 11:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:26.720 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:26.986 [2024-07-25 11:18:42.747160] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:26.986 "name": "Existed_Raid", 00:07:26.986 "aliases": [ 00:07:26.986 "435669d1-61d5-4461-855b-f3c24cf74a46" 00:07:26.986 ], 00:07:26.986 "product_name": "Raid Volume", 00:07:26.986 "block_size": 512, 00:07:26.986 "num_blocks": 126976, 00:07:26.986 "uuid": "435669d1-61d5-4461-855b-f3c24cf74a46", 00:07:26.986 "assigned_rate_limits": { 00:07:26.986 "rw_ios_per_sec": 0, 00:07:26.986 "rw_mbytes_per_sec": 0, 00:07:26.986 "r_mbytes_per_sec": 0, 00:07:26.986 "w_mbytes_per_sec": 0 00:07:26.986 }, 00:07:26.986 "claimed": false, 00:07:26.986 "zoned": false, 00:07:26.986 "supported_io_types": { 00:07:26.986 "read": true, 00:07:26.986 "write": true, 00:07:26.986 "unmap": true, 00:07:26.986 "flush": true, 00:07:26.986 "reset": true, 00:07:26.986 "nvme_admin": false, 00:07:26.986 "nvme_io": false, 00:07:26.986 "nvme_io_md": false, 00:07:26.986 "write_zeroes": true, 00:07:26.986 "zcopy": false, 00:07:26.986 "get_zone_info": false, 00:07:26.986 "zone_management": false, 00:07:26.986 "zone_append": false, 00:07:26.986 "compare": false, 00:07:26.986 "compare_and_write": false, 00:07:26.986 "abort": false, 00:07:26.986 "seek_hole": false, 00:07:26.986 "seek_data": false, 00:07:26.986 "copy": false, 
00:07:26.986 "nvme_iov_md": false 00:07:26.986 }, 00:07:26.986 "memory_domains": [ 00:07:26.986 { 00:07:26.986 "dma_device_id": "system", 00:07:26.986 "dma_device_type": 1 00:07:26.986 }, 00:07:26.986 { 00:07:26.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.986 "dma_device_type": 2 00:07:26.986 }, 00:07:26.986 { 00:07:26.986 "dma_device_id": "system", 00:07:26.986 "dma_device_type": 1 00:07:26.986 }, 00:07:26.986 { 00:07:26.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.986 "dma_device_type": 2 00:07:26.986 } 00:07:26.986 ], 00:07:26.986 "driver_specific": { 00:07:26.986 "raid": { 00:07:26.986 "uuid": "435669d1-61d5-4461-855b-f3c24cf74a46", 00:07:26.986 "strip_size_kb": 64, 00:07:26.986 "state": "online", 00:07:26.986 "raid_level": "raid0", 00:07:26.986 "superblock": true, 00:07:26.986 "num_base_bdevs": 2, 00:07:26.986 "num_base_bdevs_discovered": 2, 00:07:26.986 "num_base_bdevs_operational": 2, 00:07:26.986 "base_bdevs_list": [ 00:07:26.986 { 00:07:26.986 "name": "BaseBdev1", 00:07:26.986 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:26.986 "is_configured": true, 00:07:26.986 "data_offset": 2048, 00:07:26.986 "data_size": 63488 00:07:26.986 }, 00:07:26.986 { 00:07:26.986 "name": "BaseBdev2", 00:07:26.986 "uuid": "12650ffb-0310-43b9-b31b-5fda69bc130b", 00:07:26.986 "is_configured": true, 00:07:26.986 "data_offset": 2048, 00:07:26.986 "data_size": 63488 00:07:26.986 } 00:07:26.986 ] 00:07:26.986 } 00:07:26.986 } 00:07:26.986 }' 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:26.986 BaseBdev2' 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:26.986 11:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:27.265 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:27.265 "name": "BaseBdev1", 00:07:27.265 "aliases": [ 00:07:27.265 "face7c0e-117d-4174-a8f0-fc103ba35f0e" 00:07:27.265 ], 00:07:27.265 "product_name": "Malloc disk", 00:07:27.265 "block_size": 512, 00:07:27.265 "num_blocks": 65536, 00:07:27.265 "uuid": "face7c0e-117d-4174-a8f0-fc103ba35f0e", 00:07:27.265 "assigned_rate_limits": { 00:07:27.265 "rw_ios_per_sec": 0, 00:07:27.265 "rw_mbytes_per_sec": 0, 00:07:27.265 "r_mbytes_per_sec": 0, 00:07:27.265 "w_mbytes_per_sec": 0 00:07:27.265 }, 00:07:27.265 "claimed": true, 00:07:27.265 "claim_type": "exclusive_write", 00:07:27.265 "zoned": false, 00:07:27.265 "supported_io_types": { 00:07:27.265 "read": true, 00:07:27.265 "write": true, 00:07:27.265 "unmap": true, 00:07:27.265 "flush": true, 00:07:27.265 "reset": true, 00:07:27.265 "nvme_admin": false, 00:07:27.265 "nvme_io": false, 00:07:27.265 "nvme_io_md": false, 00:07:27.265 "write_zeroes": true, 00:07:27.265 "zcopy": true, 00:07:27.265 "get_zone_info": false, 00:07:27.265 "zone_management": false, 00:07:27.265 "zone_append": false, 00:07:27.265 "compare": false, 00:07:27.265 "compare_and_write": false, 00:07:27.265 "abort": true, 00:07:27.265 "seek_hole": false, 00:07:27.265 "seek_data": false, 00:07:27.265 "copy": true, 
00:07:27.265 "nvme_iov_md": false 00:07:27.265 }, 00:07:27.265 "memory_domains": [ 00:07:27.265 { 00:07:27.265 "dma_device_id": "system", 00:07:27.265 "dma_device_type": 1 00:07:27.265 }, 00:07:27.265 { 00:07:27.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.265 "dma_device_type": 2 00:07:27.265 } 00:07:27.265 ], 00:07:27.265 "driver_specific": {} 00:07:27.265 }' 00:07:27.265 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:27.265 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:27.523 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.781 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:27.781 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:27.781 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:27.781 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:27.781 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:28.039 "name": "BaseBdev2", 00:07:28.039 "aliases": [ 00:07:28.039 "12650ffb-0310-43b9-b31b-5fda69bc130b" 00:07:28.039 ], 00:07:28.039 "product_name": "Malloc disk", 00:07:28.039 "block_size": 512, 00:07:28.039 "num_blocks": 65536, 00:07:28.039 "uuid": "12650ffb-0310-43b9-b31b-5fda69bc130b", 00:07:28.039 "assigned_rate_limits": { 00:07:28.039 "rw_ios_per_sec": 0, 00:07:28.039 "rw_mbytes_per_sec": 0, 00:07:28.039 "r_mbytes_per_sec": 0, 00:07:28.039 "w_mbytes_per_sec": 0 00:07:28.039 }, 00:07:28.039 "claimed": true, 00:07:28.039 "claim_type": "exclusive_write", 00:07:28.039 "zoned": false, 00:07:28.039 "supported_io_types": { 00:07:28.039 "read": true, 00:07:28.039 "write": true, 00:07:28.039 "unmap": true, 00:07:28.039 "flush": true, 00:07:28.039 "reset": true, 00:07:28.039 "nvme_admin": false, 00:07:28.039 "nvme_io": false, 00:07:28.039 "nvme_io_md": false, 00:07:28.039 "write_zeroes": true, 00:07:28.039 "zcopy": true, 00:07:28.039 "get_zone_info": false, 00:07:28.039 "zone_management": false, 00:07:28.039 "zone_append": false, 00:07:28.039 "compare": false, 00:07:28.039 "compare_and_write": false, 00:07:28.039 "abort": true, 00:07:28.039 "seek_hole": false, 00:07:28.039 "seek_data": false, 00:07:28.039 "copy": true, 00:07:28.039 "nvme_iov_md": false 00:07:28.039 }, 00:07:28.039 "memory_domains": [ 00:07:28.039 { 00:07:28.039 "dma_device_id": "system", 00:07:28.039 
"dma_device_type": 1 00:07:28.039 }, 00:07:28.039 { 00:07:28.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.039 "dma_device_type": 2 00:07:28.039 } 00:07:28.039 ], 00:07:28.039 "driver_specific": {} 00:07:28.039 }' 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.039 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.297 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:28.297 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.297 11:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.297 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:28.297 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:28.297 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:28.297 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:28.297 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:28.556 [2024-07-25 11:18:44.395356] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.556 [2024-07-25 11:18:44.395403] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.556 [2024-07-25 11:18:44.395493] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:28.814 11:18:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.814 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.073 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:29.073 "name": "Existed_Raid", 00:07:29.073 "uuid": "435669d1-61d5-4461-855b-f3c24cf74a46", 00:07:29.073 "strip_size_kb": 64, 00:07:29.073 "state": "offline", 00:07:29.073 "raid_level": "raid0", 00:07:29.073 "superblock": true, 00:07:29.073 "num_base_bdevs": 2, 00:07:29.073 "num_base_bdevs_discovered": 1, 00:07:29.073 "num_base_bdevs_operational": 1, 00:07:29.073 "base_bdevs_list": [ 00:07:29.073 { 00:07:29.073 "name": null, 00:07:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.073 "is_configured": false, 00:07:29.073 "data_offset": 2048, 00:07:29.073 "data_size": 63488 00:07:29.073 }, 00:07:29.073 { 00:07:29.073 "name": "BaseBdev2", 00:07:29.073 "uuid": "12650ffb-0310-43b9-b31b-5fda69bc130b", 00:07:29.073 "is_configured": true, 00:07:29.073 "data_offset": 2048, 00:07:29.073 "data_size": 63488 00:07:29.073 } 00:07:29.073 ] 00:07:29.073 }' 00:07:29.073 11:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:29.073 11:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.640 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:29.640 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:29.640 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.640 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:29.899 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:29.899 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:29.899 11:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:30.157 [2024-07-25 11:18:45.977995] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.157 [2024-07-25 11:18:45.978074] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:30.416 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:30.416 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:30.416 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.416 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:30.674 
11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 62987 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62987 ']' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62987 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62987 00:07:30.674 killing process with pid 62987 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62987' 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62987 00:07:30.674 [2024-07-25 11:18:46.356534] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.674 11:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62987 00:07:30.674 [2024-07-25 11:18:46.371714] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.051 11:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:32.051 00:07:32.051 real 0m12.933s 00:07:32.051 user 0m22.500s 00:07:32.051 sys 0m1.729s 00:07:32.051 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.051 11:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.051 ************************************ 00:07:32.051 END TEST raid_state_function_test_sb 00:07:32.051 ************************************ 00:07:32.051 11:18:47 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:32.051 11:18:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:32.051 11:18:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.051 11:18:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.051 ************************************ 00:07:32.051 START TEST raid_superblock_test 00:07:32.051 ************************************ 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local 
base_bdevs_pt 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=63365 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 63365 /var/tmp/spdk-raid.sock 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63365 ']' 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.051 11:18:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:32.051 [2024-07-25 11:18:47.706820] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:32.051 [2024-07-25 11:18:47.708167] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63365 ] 00:07:32.051 [2024-07-25 11:18:47.886365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.309 [2024-07-25 11:18:48.121245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.568 [2024-07-25 11:18:48.327291] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.568 [2024-07-25 11:18:48.327376] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.827 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:33.086 malloc1 00:07:33.086 11:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.345 [2024-07-25 11:18:49.138604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.345 [2024-07-25 11:18:49.138786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.345 [2024-07-25 11:18:49.138820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:33.345 [2024-07-25 11:18:49.138841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.345 [2024-07-25 11:18:49.142020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.345 [2024-07-25 11:18:49.142086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.345 pt1 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:33.345 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:33.604 malloc2 00:07:33.604 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.863 [2024-07-25 11:18:49.670059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.863 [2024-07-25 11:18:49.670162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.863 [2024-07-25 11:18:49.670198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:33.863 [2024-07-25 11:18:49.670221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.863 [2024-07-25 11:18:49.673064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.863 [2024-07-25 11:18:49.673109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.863 pt2 00:07:33.863 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:07:33.863 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:07:33.863 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:34.121 [2024-07-25 11:18:49.906201] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:34.121 [2024-07-25 11:18:49.908607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:34.121 [2024-07-25 11:18:49.908862] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:34.121 [2024-07-25 11:18:49.908891] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.121 [2024-07-25 11:18:49.909300] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:34.121 [2024-07-25 11:18:49.909541] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:34.121 [2024-07-25 11:18:49.909571] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:34.121 [2024-07-25 11:18:49.909803] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.121 11:18:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.379 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:34.379 "name": "raid_bdev1", 00:07:34.379 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:34.379 "strip_size_kb": 64, 00:07:34.379 "state": "online", 00:07:34.379 "raid_level": "raid0", 00:07:34.379 "superblock": true, 00:07:34.379 "num_base_bdevs": 2, 00:07:34.379 "num_base_bdevs_discovered": 2, 00:07:34.379 "num_base_bdevs_operational": 2, 00:07:34.379 "base_bdevs_list": [ 00:07:34.379 { 00:07:34.379 "name": "pt1", 00:07:34.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.379 "is_configured": true, 00:07:34.379 "data_offset": 2048, 00:07:34.379 "data_size": 63488 00:07:34.379 }, 00:07:34.379 { 00:07:34.379 "name": "pt2", 00:07:34.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.379 "is_configured": true, 00:07:34.379 "data_offset": 2048, 00:07:34.379 "data_size": 63488 00:07:34.379 } 00:07:34.379 ] 00:07:34.379 }' 00:07:34.379 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:34.379 11:18:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:35.313 11:18:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:35.313 [2024-07-25 11:18:51.154817] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.313 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:35.313 "name": "raid_bdev1", 00:07:35.313 "aliases": [ 00:07:35.313 "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7" 00:07:35.313 ], 00:07:35.313 "product_name": "Raid Volume", 00:07:35.313 "block_size": 512, 00:07:35.313 "num_blocks": 126976, 00:07:35.313 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:35.313 "assigned_rate_limits": { 00:07:35.313 
"rw_ios_per_sec": 0, 00:07:35.313 "rw_mbytes_per_sec": 0, 00:07:35.313 "r_mbytes_per_sec": 0, 00:07:35.313 "w_mbytes_per_sec": 0 00:07:35.313 }, 00:07:35.313 "claimed": false, 00:07:35.313 "zoned": false, 00:07:35.313 "supported_io_types": { 00:07:35.314 "read": true, 00:07:35.314 "write": true, 00:07:35.314 "unmap": true, 00:07:35.314 "flush": true, 00:07:35.314 "reset": true, 00:07:35.314 "nvme_admin": false, 00:07:35.314 "nvme_io": false, 00:07:35.314 "nvme_io_md": false, 00:07:35.314 "write_zeroes": true, 00:07:35.314 "zcopy": false, 00:07:35.314 "get_zone_info": false, 00:07:35.314 "zone_management": false, 00:07:35.314 "zone_append": false, 00:07:35.314 "compare": false, 00:07:35.314 "compare_and_write": false, 00:07:35.314 "abort": false, 00:07:35.314 "seek_hole": false, 00:07:35.314 "seek_data": false, 00:07:35.314 "copy": false, 00:07:35.314 "nvme_iov_md": false 00:07:35.314 }, 00:07:35.314 "memory_domains": [ 00:07:35.314 { 00:07:35.314 "dma_device_id": "system", 00:07:35.314 "dma_device_type": 1 00:07:35.314 }, 00:07:35.314 { 00:07:35.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.314 "dma_device_type": 2 00:07:35.314 }, 00:07:35.314 { 00:07:35.314 "dma_device_id": "system", 00:07:35.314 "dma_device_type": 1 00:07:35.314 }, 00:07:35.314 { 00:07:35.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.314 "dma_device_type": 2 00:07:35.314 } 00:07:35.314 ], 00:07:35.314 "driver_specific": { 00:07:35.314 "raid": { 00:07:35.314 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:35.314 "strip_size_kb": 64, 00:07:35.314 "state": "online", 00:07:35.314 "raid_level": "raid0", 00:07:35.314 "superblock": true, 00:07:35.314 "num_base_bdevs": 2, 00:07:35.314 "num_base_bdevs_discovered": 2, 00:07:35.314 "num_base_bdevs_operational": 2, 00:07:35.314 "base_bdevs_list": [ 00:07:35.314 { 00:07:35.314 "name": "pt1", 00:07:35.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.314 "is_configured": true, 00:07:35.314 "data_offset": 2048, 00:07:35.314 "data_size": 63488 00:07:35.314 }, 00:07:35.314 { 00:07:35.314 "name": "pt2", 00:07:35.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.314 "is_configured": true, 00:07:35.314 "data_offset": 2048, 00:07:35.314 "data_size": 63488 00:07:35.314 } 00:07:35.314 ] 00:07:35.314 } 00:07:35.314 } 00:07:35.314 }' 00:07:35.314 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.572 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:35.572 pt2' 00:07:35.572 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:35.572 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:35.572 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:35.831 "name": "pt1", 00:07:35.831 "aliases": [ 00:07:35.831 "00000000-0000-0000-0000-000000000001" 00:07:35.831 ], 00:07:35.831 "product_name": "passthru", 00:07:35.831 "block_size": 512, 00:07:35.831 "num_blocks": 65536, 00:07:35.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.831 "assigned_rate_limits": { 00:07:35.831 "rw_ios_per_sec": 0, 00:07:35.831 "rw_mbytes_per_sec": 0, 00:07:35.831 "r_mbytes_per_sec": 0, 00:07:35.831 "w_mbytes_per_sec": 
0 00:07:35.831 }, 00:07:35.831 "claimed": true, 00:07:35.831 "claim_type": "exclusive_write", 00:07:35.831 "zoned": false, 00:07:35.831 "supported_io_types": { 00:07:35.831 "read": true, 00:07:35.831 "write": true, 00:07:35.831 "unmap": true, 00:07:35.831 "flush": true, 00:07:35.831 "reset": true, 00:07:35.831 "nvme_admin": false, 00:07:35.831 "nvme_io": false, 00:07:35.831 "nvme_io_md": false, 00:07:35.831 "write_zeroes": true, 00:07:35.831 "zcopy": true, 00:07:35.831 "get_zone_info": false, 00:07:35.831 "zone_management": false, 00:07:35.831 "zone_append": false, 00:07:35.831 "compare": false, 00:07:35.831 "compare_and_write": false, 00:07:35.831 "abort": true, 00:07:35.831 "seek_hole": false, 00:07:35.831 "seek_data": false, 00:07:35.831 "copy": true, 00:07:35.831 "nvme_iov_md": false 00:07:35.831 }, 00:07:35.831 "memory_domains": [ 00:07:35.831 { 00:07:35.831 "dma_device_id": "system", 00:07:35.831 "dma_device_type": 1 00:07:35.831 }, 00:07:35.831 { 00:07:35.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.831 "dma_device_type": 2 00:07:35.831 } 00:07:35.831 ], 00:07:35.831 "driver_specific": { 00:07:35.831 "passthru": { 00:07:35.831 "name": "pt1", 00:07:35.831 "base_bdev_name": "malloc1" 00:07:35.831 } 00:07:35.831 } 00:07:35.831 }' 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:35.831 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:36.092 11:18:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:36.349 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:36.350 "name": "pt2", 00:07:36.350 "aliases": [ 00:07:36.350 "00000000-0000-0000-0000-000000000002" 00:07:36.350 ], 00:07:36.350 "product_name": "passthru", 00:07:36.350 "block_size": 512, 00:07:36.350 "num_blocks": 65536, 00:07:36.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.350 "assigned_rate_limits": { 00:07:36.350 "rw_ios_per_sec": 0, 00:07:36.350 "rw_mbytes_per_sec": 0, 00:07:36.350 "r_mbytes_per_sec": 0, 00:07:36.350 "w_mbytes_per_sec": 0 00:07:36.350 }, 00:07:36.350 "claimed": true, 00:07:36.350 "claim_type": "exclusive_write", 00:07:36.350 "zoned": false, 00:07:36.350 
"supported_io_types": { 00:07:36.350 "read": true, 00:07:36.350 "write": true, 00:07:36.350 "unmap": true, 00:07:36.350 "flush": true, 00:07:36.350 "reset": true, 00:07:36.350 "nvme_admin": false, 00:07:36.350 "nvme_io": false, 00:07:36.350 "nvme_io_md": false, 00:07:36.350 "write_zeroes": true, 00:07:36.350 "zcopy": true, 00:07:36.350 "get_zone_info": false, 00:07:36.350 "zone_management": false, 00:07:36.350 "zone_append": false, 00:07:36.350 "compare": false, 00:07:36.350 "compare_and_write": false, 00:07:36.350 "abort": true, 00:07:36.350 "seek_hole": false, 00:07:36.350 "seek_data": false, 00:07:36.350 "copy": true, 00:07:36.350 "nvme_iov_md": false 00:07:36.350 }, 00:07:36.350 "memory_domains": [ 00:07:36.350 { 00:07:36.350 "dma_device_id": "system", 00:07:36.350 "dma_device_type": 1 00:07:36.350 }, 00:07:36.350 { 00:07:36.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.350 "dma_device_type": 2 00:07:36.350 } 00:07:36.350 ], 00:07:36.350 "driver_specific": { 00:07:36.350 "passthru": { 00:07:36.350 "name": "pt2", 00:07:36.350 "base_bdev_name": "malloc2" 00:07:36.350 } 00:07:36.350 } 00:07:36.350 }' 00:07:36.350 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:36.608 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.867 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.867 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:36.867 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:07:36.867 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:37.127 [2024-07-25 11:18:52.783310] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.127 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=ede6f1e3-8bcb-4642-929b-3083d0ddb7a7 00:07:37.127 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z ede6f1e3-8bcb-4642-929b-3083d0ddb7a7 ']' 00:07:37.127 11:18:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:37.385 [2024-07-25 11:18:53.062958] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.385 [2024-07-25 11:18:53.063008] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.385 [2024-07-25 11:18:53.063121] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.385 
[2024-07-25 11:18:53.063225] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.385 [2024-07-25 11:18:53.063242] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:37.385 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.385 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:07:37.646 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:07:37.646 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:07:37.646 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.646 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:37.914 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:07:37.914 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:38.172 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:38.172 11:18:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.429 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:38.429 [2024-07-25 11:18:54.267424] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:38.429 [2024-07-25 11:18:54.270269] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:38.429 [2024-07-25 11:18:54.270376] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:38.429 [2024-07-25 11:18:54.270497] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:38.429 [2024-07-25 11:18:54.270529] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:38.430 [2024-07-25 11:18:54.270544] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:38.430 request: 00:07:38.430 { 00:07:38.430 "name": "raid_bdev1", 00:07:38.430 "raid_level": "raid0", 00:07:38.430 "base_bdevs": [ 00:07:38.430 "malloc1", 00:07:38.430 "malloc2" 00:07:38.430 ], 00:07:38.430 "strip_size_kb": 64, 00:07:38.430 "superblock": false, 00:07:38.430 "method": "bdev_raid_create", 00:07:38.430 "req_id": 1 00:07:38.430 } 00:07:38.430 Got JSON-RPC error response 00:07:38.430 response: 00:07:38.430 { 00:07:38.430 "code": -17, 00:07:38.430 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:38.430 } 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.430 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.995 [2024-07-25 11:18:54.787586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.995 [2024-07-25 11:18:54.787736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.995 [2024-07-25 11:18:54.787801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:38.995 [2024-07-25 11:18:54.787835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.995 [2024-07-25 11:18:54.791801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.995 [2024-07-25 11:18:54.791872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.995 [2024-07-25 11:18:54.792084] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:38.995 [2024-07-25 11:18:54.792206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:07:38.995 pt1 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.995 11:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.254 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:39.254 "name": "raid_bdev1", 00:07:39.254 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:39.254 "strip_size_kb": 64, 00:07:39.254 "state": "configuring", 00:07:39.254 "raid_level": "raid0", 00:07:39.254 "superblock": true, 00:07:39.254 "num_base_bdevs": 2, 00:07:39.254 "num_base_bdevs_discovered": 1, 00:07:39.254 "num_base_bdevs_operational": 2, 00:07:39.254 "base_bdevs_list": [ 00:07:39.254 { 00:07:39.254 "name": "pt1", 00:07:39.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.254 "is_configured": true, 00:07:39.254 "data_offset": 2048, 00:07:39.254 "data_size": 63488 00:07:39.254 }, 00:07:39.254 { 00:07:39.254 "name": null, 00:07:39.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.254 "is_configured": false, 00:07:39.254 "data_offset": 2048, 00:07:39.254 "data_size": 63488 00:07:39.254 } 00:07:39.254 ] 00:07:39.254 }' 00:07:39.254 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:39.254 11:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:07:39.820 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:07:39.820 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:39.820 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.079 [2024-07-25 11:18:55.880468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.079 [2024-07-25 11:18:55.880903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.079 [2024-07-25 11:18:55.881000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:40.079 [2024-07-25 
11:18:55.881027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.079 [2024-07-25 11:18:55.881715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.079 [2024-07-25 11:18:55.881756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.079 [2024-07-25 11:18:55.881880] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.079 [2024-07-25 11:18:55.881928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.079 [2024-07-25 11:18:55.882117] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:40.079 [2024-07-25 11:18:55.882135] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.079 [2024-07-25 11:18:55.882443] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:40.079 [2024-07-25 11:18:55.882674] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:40.079 [2024-07-25 11:18:55.882701] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:40.079 [2024-07-25 11:18:55.882870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.079 pt2 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.079 11:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.337 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.337 "name": "raid_bdev1", 00:07:40.338 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:40.338 "strip_size_kb": 64, 00:07:40.338 "state": "online", 00:07:40.338 "raid_level": "raid0", 00:07:40.338 "superblock": true, 00:07:40.338 "num_base_bdevs": 2, 00:07:40.338 "num_base_bdevs_discovered": 2, 00:07:40.338 "num_base_bdevs_operational": 2, 00:07:40.338 "base_bdevs_list": [ 00:07:40.338 { 00:07:40.338 "name": "pt1", 00:07:40.338 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:40.338 "is_configured": true, 00:07:40.338 "data_offset": 2048, 00:07:40.338 "data_size": 63488 00:07:40.338 }, 00:07:40.338 { 00:07:40.338 "name": "pt2", 00:07:40.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.338 "is_configured": true, 00:07:40.338 "data_offset": 2048, 00:07:40.338 "data_size": 63488 00:07:40.338 } 00:07:40.338 ] 00:07:40.338 }' 00:07:40.338 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.338 11:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:41.272 11:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:41.272 [2024-07-25 11:18:57.139691] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.530 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:41.531 "name": "raid_bdev1", 00:07:41.531 "aliases": [ 00:07:41.531 "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7" 00:07:41.531 ], 00:07:41.531 "product_name": "Raid Volume", 00:07:41.531 "block_size": 512, 00:07:41.531 "num_blocks": 126976, 00:07:41.531 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:41.531 "assigned_rate_limits": { 00:07:41.531 "rw_ios_per_sec": 0, 00:07:41.531 "rw_mbytes_per_sec": 0, 00:07:41.531 "r_mbytes_per_sec": 0, 00:07:41.531 "w_mbytes_per_sec": 0 00:07:41.531 }, 00:07:41.531 "claimed": false, 00:07:41.531 "zoned": false, 00:07:41.531 "supported_io_types": { 00:07:41.531 "read": true, 00:07:41.531 "write": true, 00:07:41.531 "unmap": true, 00:07:41.531 "flush": true, 00:07:41.531 "reset": true, 00:07:41.531 "nvme_admin": false, 00:07:41.531 "nvme_io": false, 00:07:41.531 "nvme_io_md": false, 00:07:41.531 "write_zeroes": true, 00:07:41.531 "zcopy": false, 00:07:41.531 "get_zone_info": false, 00:07:41.531 "zone_management": false, 00:07:41.531 "zone_append": false, 00:07:41.531 "compare": false, 00:07:41.531 "compare_and_write": false, 00:07:41.531 "abort": false, 00:07:41.531 "seek_hole": false, 00:07:41.531 "seek_data": false, 00:07:41.531 "copy": false, 00:07:41.531 "nvme_iov_md": false 00:07:41.531 }, 00:07:41.531 "memory_domains": [ 00:07:41.531 { 00:07:41.531 "dma_device_id": "system", 00:07:41.531 "dma_device_type": 1 00:07:41.531 }, 00:07:41.531 { 00:07:41.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.531 "dma_device_type": 2 00:07:41.531 }, 00:07:41.531 { 00:07:41.531 "dma_device_id": "system", 00:07:41.531 "dma_device_type": 1 00:07:41.531 }, 00:07:41.531 { 00:07:41.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.531 "dma_device_type": 2 00:07:41.531 } 00:07:41.531 ], 00:07:41.531 "driver_specific": { 00:07:41.531 "raid": { 
00:07:41.531 "uuid": "ede6f1e3-8bcb-4642-929b-3083d0ddb7a7", 00:07:41.531 "strip_size_kb": 64, 00:07:41.531 "state": "online", 00:07:41.531 "raid_level": "raid0", 00:07:41.531 "superblock": true, 00:07:41.531 "num_base_bdevs": 2, 00:07:41.531 "num_base_bdevs_discovered": 2, 00:07:41.531 "num_base_bdevs_operational": 2, 00:07:41.531 "base_bdevs_list": [ 00:07:41.531 { 00:07:41.531 "name": "pt1", 00:07:41.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.531 "is_configured": true, 00:07:41.531 "data_offset": 2048, 00:07:41.531 "data_size": 63488 00:07:41.531 }, 00:07:41.531 { 00:07:41.531 "name": "pt2", 00:07:41.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.531 "is_configured": true, 00:07:41.531 "data_offset": 2048, 00:07:41.531 "data_size": 63488 00:07:41.531 } 00:07:41.531 ] 00:07:41.531 } 00:07:41.531 } 00:07:41.531 }' 00:07:41.531 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.531 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:41.531 pt2' 00:07:41.531 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:41.531 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:41.531 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:41.790 "name": "pt1", 00:07:41.790 "aliases": [ 00:07:41.790 "00000000-0000-0000-0000-000000000001" 00:07:41.790 ], 00:07:41.790 "product_name": "passthru", 00:07:41.790 "block_size": 512, 00:07:41.790 "num_blocks": 65536, 00:07:41.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.790 "assigned_rate_limits": { 00:07:41.790 "rw_ios_per_sec": 0, 00:07:41.790 "rw_mbytes_per_sec": 0, 00:07:41.790 "r_mbytes_per_sec": 0, 00:07:41.790 "w_mbytes_per_sec": 0 00:07:41.790 }, 00:07:41.790 "claimed": true, 00:07:41.790 "claim_type": "exclusive_write", 00:07:41.790 "zoned": false, 00:07:41.790 "supported_io_types": { 00:07:41.790 "read": true, 00:07:41.790 "write": true, 00:07:41.790 "unmap": true, 00:07:41.790 "flush": true, 00:07:41.790 "reset": true, 00:07:41.790 "nvme_admin": false, 00:07:41.790 "nvme_io": false, 00:07:41.790 "nvme_io_md": false, 00:07:41.790 "write_zeroes": true, 00:07:41.790 "zcopy": true, 00:07:41.790 "get_zone_info": false, 00:07:41.790 "zone_management": false, 00:07:41.790 "zone_append": false, 00:07:41.790 "compare": false, 00:07:41.790 "compare_and_write": false, 00:07:41.790 "abort": true, 00:07:41.790 "seek_hole": false, 00:07:41.790 "seek_data": false, 00:07:41.790 "copy": true, 00:07:41.790 "nvme_iov_md": false 00:07:41.790 }, 00:07:41.790 "memory_domains": [ 00:07:41.790 { 00:07:41.790 "dma_device_id": "system", 00:07:41.790 "dma_device_type": 1 00:07:41.790 }, 00:07:41.790 { 00:07:41.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.790 "dma_device_type": 2 00:07:41.790 } 00:07:41.790 ], 00:07:41.790 "driver_specific": { 00:07:41.790 "passthru": { 00:07:41.790 "name": "pt1", 00:07:41.790 "base_bdev_name": "malloc1" 00:07:41.790 } 00:07:41.790 } 00:07:41.790 }' 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.790 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.048 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:42.048 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.048 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:42.049 11:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:42.616 "name": "pt2", 00:07:42.616 "aliases": [ 00:07:42.616 "00000000-0000-0000-0000-000000000002" 00:07:42.616 ], 00:07:42.616 "product_name": "passthru", 00:07:42.616 "block_size": 512, 00:07:42.616 "num_blocks": 65536, 00:07:42.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.616 "assigned_rate_limits": { 00:07:42.616 "rw_ios_per_sec": 0, 00:07:42.616 "rw_mbytes_per_sec": 0, 00:07:42.616 "r_mbytes_per_sec": 0, 00:07:42.616 "w_mbytes_per_sec": 0 00:07:42.616 }, 00:07:42.616 "claimed": true, 00:07:42.616 "claim_type": "exclusive_write", 00:07:42.616 "zoned": false, 00:07:42.616 "supported_io_types": { 00:07:42.616 "read": true, 00:07:42.616 "write": true, 00:07:42.616 "unmap": true, 00:07:42.616 "flush": true, 00:07:42.616 "reset": true, 00:07:42.616 "nvme_admin": false, 00:07:42.616 "nvme_io": false, 00:07:42.616 "nvme_io_md": false, 00:07:42.616 "write_zeroes": true, 00:07:42.616 "zcopy": true, 00:07:42.616 "get_zone_info": false, 00:07:42.616 "zone_management": false, 00:07:42.616 "zone_append": false, 00:07:42.616 "compare": false, 00:07:42.616 "compare_and_write": false, 00:07:42.616 "abort": true, 00:07:42.616 "seek_hole": false, 00:07:42.616 "seek_data": false, 00:07:42.616 "copy": true, 00:07:42.616 "nvme_iov_md": false 00:07:42.616 }, 00:07:42.616 "memory_domains": [ 00:07:42.616 { 00:07:42.616 "dma_device_id": "system", 00:07:42.616 "dma_device_type": 1 00:07:42.616 }, 00:07:42.616 { 00:07:42.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.616 "dma_device_type": 2 00:07:42.616 } 00:07:42.616 ], 00:07:42.616 "driver_specific": { 00:07:42.616 "passthru": { 00:07:42.616 "name": "pt2", 00:07:42.616 "base_bdev_name": "malloc2" 00:07:42.616 } 00:07:42.616 } 00:07:42.616 }' 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:42.616 11:18:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:42.616 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.874 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:42.874 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:42.874 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:42.874 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:07:43.134 [2024-07-25 11:18:58.815727] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' ede6f1e3-8bcb-4642-929b-3083d0ddb7a7 '!=' ede6f1e3-8bcb-4642-929b-3083d0ddb7a7 ']' 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 63365 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63365 ']' 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63365 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63365 00:07:43.134 killing process with pid 63365 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63365' 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63365 00:07:43.134 [2024-07-25 11:18:58.869680] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.134 [2024-07-25 11:18:58.869796] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.134 [2024-07-25 11:18:58.869873] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.134 [2024-07-25 11:18:58.869889] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.134 11:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63365 00:07:43.392 [2024-07-25 
11:18:59.052463] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.767 11:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:07:44.767 00:07:44.767 real 0m12.618s 00:07:44.767 user 0m21.914s 00:07:44.767 sys 0m1.696s 00:07:44.767 11:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.767 11:19:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.767 ************************************ 00:07:44.767 END TEST raid_superblock_test 00:07:44.767 ************************************ 00:07:44.767 11:19:00 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:44.767 11:19:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:44.767 11:19:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.767 11:19:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.767 ************************************ 00:07:44.767 START TEST raid_read_error_test 00:07:44.767 ************************************ 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:44.767 11:19:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.H3XT8aa50P 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=63731 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 63731 /var/tmp/spdk-raid.sock 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63731 ']' 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.767 11:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.767 [2024-07-25 11:19:00.376256] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:07:44.768 [2024-07-25 11:19:00.376417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:07:44.768 [2024-07-25 11:19:00.546778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.026 [2024-07-25 11:19:00.820144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.284 [2024-07-25 11:19:01.022329] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.284 [2024-07-25 11:19:01.022412] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.542 11:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.542 11:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.542 11:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:45.542 11:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:45.799 BaseBdev1_malloc 00:07:45.799 11:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:46.056 true 00:07:46.056 11:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.312 [2024-07-25 11:19:02.171727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.312 [2024-07-25 11:19:02.171842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.312 [2024-07-25 11:19:02.171882] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.312 [2024-07-25 11:19:02.171902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.312 [2024-07-25 11:19:02.174992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.313 [2024-07-25 11:19:02.175072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.313 BaseBdev1 00:07:46.313 11:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:46.313 11:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.877 BaseBdev2_malloc 00:07:46.878 11:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:46.878 true 00:07:46.878 11:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.135 [2024-07-25 11:19:02.926549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.135 [2024-07-25 11:19:02.926689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.135 [2024-07-25 11:19:02.926737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.135 [2024-07-25 11:19:02.926774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.135 [2024-07-25 11:19:02.929695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.135 [2024-07-25 11:19:02.929741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.135 BaseBdev2 00:07:47.135 11:19:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:47.394 [2024-07-25 11:19:03.158824] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.394 [2024-07-25 11:19:03.161660] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.394 [2024-07-25 11:19:03.162117] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.394 [2024-07-25 11:19:03.162279] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.394 [2024-07-25 11:19:03.162728] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.394 [2024-07-25 11:19:03.163137] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.394 [2024-07-25 11:19:03.163312] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.394 [2024-07-25 11:19:03.163895] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.394 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.669 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.669 "name": "raid_bdev1", 00:07:47.669 "uuid": "81047d6f-75f4-4686-9287-4549cff640bf", 00:07:47.669 "strip_size_kb": 64, 00:07:47.669 "state": "online", 00:07:47.669 "raid_level": "raid0", 00:07:47.669 "superblock": true, 00:07:47.669 "num_base_bdevs": 2, 00:07:47.669 "num_base_bdevs_discovered": 2, 00:07:47.669 "num_base_bdevs_operational": 2, 00:07:47.669 "base_bdevs_list": [ 00:07:47.669 { 00:07:47.669 "name": "BaseBdev1", 00:07:47.669 "uuid": "0a2d53fa-6b75-5e83-a8d4-4496dd90276e", 00:07:47.669 "is_configured": true, 00:07:47.669 "data_offset": 2048, 00:07:47.669 "data_size": 63488 00:07:47.669 }, 00:07:47.669 { 00:07:47.669 "name": "BaseBdev2", 00:07:47.669 "uuid": "06777876-dd05-5684-9d2b-f58c45744968", 00:07:47.669 "is_configured": true, 00:07:47.669 "data_offset": 2048, 00:07:47.669 "data_size": 63488 00:07:47.669 } 00:07:47.669 ] 00:07:47.669 }' 00:07:47.669 11:19:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.669 11:19:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.600 11:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:48.600 11:19:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:48.600 [2024-07-25 11:19:04.241524] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.534 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.100 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:50.100 "name": "raid_bdev1", 00:07:50.100 "uuid": "81047d6f-75f4-4686-9287-4549cff640bf", 00:07:50.100 "strip_size_kb": 64, 00:07:50.100 "state": "online", 00:07:50.100 "raid_level": "raid0", 00:07:50.100 "superblock": true, 00:07:50.100 "num_base_bdevs": 2, 00:07:50.100 "num_base_bdevs_discovered": 2, 00:07:50.100 "num_base_bdevs_operational": 2, 00:07:50.100 "base_bdevs_list": [ 00:07:50.100 { 00:07:50.100 "name": "BaseBdev1", 00:07:50.100 "uuid": "0a2d53fa-6b75-5e83-a8d4-4496dd90276e", 00:07:50.100 "is_configured": true, 00:07:50.100 "data_offset": 2048, 00:07:50.100 "data_size": 63488 00:07:50.100 }, 00:07:50.100 { 00:07:50.100 "name": "BaseBdev2", 00:07:50.100 "uuid": "06777876-dd05-5684-9d2b-f58c45744968", 00:07:50.100 "is_configured": true, 00:07:50.100 "data_offset": 2048, 00:07:50.100 "data_size": 63488 00:07:50.100 } 00:07:50.100 ] 00:07:50.100 }' 00:07:50.101 11:19:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:50.101 11:19:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.669 11:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:50.928 [2024-07-25 11:19:06.592665] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.928 [2024-07-25 11:19:06.592946] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.928 [2024-07-25 11:19:06.596385] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.928 [2024-07-25 11:19:06.596609] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.928 [2024-07-25 11:19:06.596822] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.928 [2024-07-25 11:19:06.596969] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 63731 00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63731 ']' 00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63731
00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.928 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63731 00:07:50.929 killing process with pid 63731 00:07:50.929 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.929 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.929 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63731' 00:07:50.929 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63731 00:07:50.929 [2024-07-25 11:19:06.640881] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.929 11:19:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63731 00:07:50.929 [2024-07-25 11:19:06.763737] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.H3XT8aa50P 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:07:52.304 ************************************ 00:07:52.304 END TEST raid_read_error_test 00:07:52.304 ************************************ 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:07:52.304 00:07:52.304 real 0m7.772s 00:07:52.304 user 0m11.636s 00:07:52.304 sys 0m0.941s 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.304 11:19:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.304 11:19:08 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:52.304 11:19:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:52.304 11:19:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.304 11:19:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.304 ************************************ 00:07:52.304 START TEST raid_write_error_test 00:07:52.304 ************************************ 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
(( i <= num_base_bdevs )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.DL45tNwXjQ 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=63923 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 63923 /var/tmp/spdk-raid.sock 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63923 ']' 00:07:52.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.304 11:19:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.562 [2024-07-25 11:19:08.213330] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:07:52.562 [2024-07-25 11:19:08.213509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63923 ] 00:07:52.562 [2024-07-25 11:19:08.390261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.821 [2024-07-25 11:19:08.668643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.080 [2024-07-25 11:19:08.882654] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.080 [2024-07-25 11:19:08.882730] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.339 11:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.339 11:19:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:53.339 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:53.339 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:53.624 BaseBdev1_malloc 00:07:53.624 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:53.900 true 00:07:53.900 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.158 [2024-07-25 11:19:09.867482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.158 [2024-07-25 11:19:09.867579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.158 [2024-07-25 11:19:09.867634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.158 [2024-07-25 11:19:09.867654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.158 [2024-07-25 11:19:09.870410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.158 [2024-07-25 11:19:09.870455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.158 BaseBdev1 00:07:54.158 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:07:54.158 11:19:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.417 BaseBdev2_malloc 00:07:54.417 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:54.676 true 00:07:54.676 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.934 [2024-07-25 11:19:10.639435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.934 [2024-07-25 11:19:10.639518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.934 [2024-07-25 11:19:10.639569] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.934 [2024-07-25 11:19:10.639586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.934 [2024-07-25 11:19:10.642395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.934 [2024-07-25 11:19:10.642441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.934 BaseBdev2 00:07:54.934 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:55.193 [2024-07-25 11:19:10.899578] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.193 [2024-07-25 11:19:10.902253] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.193 [2024-07-25 11:19:10.902531] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.193 [2024-07-25 11:19:10.902551] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.193 [2024-07-25 11:19:10.902937] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.193 [2024-07-25 11:19:10.903167] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.193 [2024-07-25 11:19:10.903188] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.193 [2024-07-25 11:19:10.903475] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.193 11:19:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.453 11:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:55.453 "name": "raid_bdev1", 00:07:55.453 "uuid": "498ac334-1869-426f-865c-6067eddddb7f", 00:07:55.453 "strip_size_kb": 64, 00:07:55.453 "state": "online", 00:07:55.453 "raid_level": "raid0", 00:07:55.453 "superblock": true, 00:07:55.453 "num_base_bdevs": 2, 00:07:55.453 
"num_base_bdevs_discovered": 2, 00:07:55.453 "num_base_bdevs_operational": 2, 00:07:55.453 "base_bdevs_list": [ 00:07:55.453 { 00:07:55.453 "name": "BaseBdev1", 00:07:55.453 "uuid": "257d7e4b-8808-5b95-9b39-857f47804a9c", 00:07:55.453 "is_configured": true, 00:07:55.453 "data_offset": 2048, 00:07:55.453 "data_size": 63488 00:07:55.453 }, 00:07:55.453 { 00:07:55.453 "name": "BaseBdev2", 00:07:55.453 "uuid": "5ad854c1-f0b1-592c-ac85-0c2d02cd4858", 00:07:55.453 "is_configured": true, 00:07:55.453 "data_offset": 2048, 00:07:55.453 "data_size": 63488 00:07:55.453 } 00:07:55.453 ] 00:07:55.453 }' 00:07:55.453 11:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:55.453 11:19:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.019 11:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:07:56.019 11:19:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:56.019 [2024-07-25 11:19:11.893190] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:56.955 11:19:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:57.213 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.214 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.470 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.471 "name": "raid_bdev1", 00:07:57.471 "uuid": "498ac334-1869-426f-865c-6067eddddb7f", 00:07:57.471 "strip_size_kb": 64, 00:07:57.471 "state": "online", 00:07:57.471 "raid_level": "raid0", 00:07:57.471 "superblock": true, 00:07:57.471 "num_base_bdevs": 2, 00:07:57.471 
"num_base_bdevs_discovered": 2, 00:07:57.471 "num_base_bdevs_operational": 2, 00:07:57.471 "base_bdevs_list": [ 00:07:57.471 { 00:07:57.471 "name": "BaseBdev1", 00:07:57.471 "uuid": "257d7e4b-8808-5b95-9b39-857f47804a9c", 00:07:57.471 "is_configured": true, 00:07:57.471 "data_offset": 2048, 00:07:57.471 "data_size": 63488 00:07:57.471 }, 00:07:57.471 { 00:07:57.471 "name": "BaseBdev2", 00:07:57.471 "uuid": "5ad854c1-f0b1-592c-ac85-0c2d02cd4858", 00:07:57.471 "is_configured": true, 00:07:57.471 "data_offset": 2048, 00:07:57.471 "data_size": 63488 00:07:57.471 } 00:07:57.471 ] 00:07:57.471 }' 00:07:57.471 11:19:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.471 11:19:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:58.433 [2024-07-25 11:19:14.258672] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.433 [2024-07-25 11:19:14.258715] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.433 0 00:07:58.433 [2024-07-25 11:19:14.261839] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.433 [2024-07-25 11:19:14.261894] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.433 [2024-07-25 11:19:14.261942] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.433 [2024-07-25 11:19:14.261956] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 63923 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63923 ']' 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63923 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63923 00:07:58.433 killing process with pid 63923 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63923' 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63923 00:07:58.433 11:19:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63923 00:07:58.433 [2024-07-25 11:19:14.303103] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.690 [2024-07-25 11:19:14.423184] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.DL45tNwXjQ 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- 
# awk '{print $6}' 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:08:00.059 00:08:00.059 real 0m7.538s 00:08:00.059 user 0m11.265s 00:08:00.059 sys 0m0.926s 00:08:00.059 ************************************ 00:08:00.059 END TEST raid_write_error_test 00:08:00.059 ************************************ 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.059 11:19:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.060 11:19:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:08:00.060 11:19:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:00.060 11:19:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.060 11:19:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.060 11:19:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.060 ************************************ 00:08:00.060 START TEST raid_state_function_test 00:08:00.060 ************************************ 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:00.060 Process raid pid: 64104 00:08:00.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=64104 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 64104' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 64104 /var/tmp/spdk-raid.sock 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64104 ']' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.060 11:19:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.060 [2024-07-25 11:19:15.795269] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
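raid_state_function_test drives everything over the RPC socket exposed by the bdev_svc app started above: it first requests a concat array whose members do not exist yet (so Existed_Raid sits in the configuring state), then creates malloc base bdevs until the array goes online, and finally deletes a base bdev to watch a non-redundant level fall to offline. A condensed sketch of that RPC sequence follows, using only commands that appear verbatim in the trace below; the order is simplified, since the script interleaves extra create/delete cycles and state checks.

    # Helper around the RPC client used throughout this run.
    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid  # members missing -> "configuring"
    rpc bdev_malloc_create 32 512 -b BaseBdev1                                     # first member is claimed
    rpc bdev_malloc_create 32 512 -b BaseBdev2                                     # second member -> state "online"
    rpc bdev_raid_get_bdevs all                                                    # inspect state and discovered counts
    rpc bdev_malloc_delete BaseBdev1                                               # concat has no redundancy -> "offline"
    rpc bdev_raid_delete Existed_Raid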
00:08:00.060 [2024-07-25 11:19:15.795700] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.317 [2024-07-25 11:19:15.971605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.574 [2024-07-25 11:19:16.205685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.574 [2024-07-25 11:19:16.409366] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.574 [2024-07-25 11:19:16.409608] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.139 11:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.139 11:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.139 11:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:01.396 [2024-07-25 11:19:17.029909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.396 [2024-07-25 11:19:17.029986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.396 [2024-07-25 11:19:17.030006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.396 [2024-07-25 11:19:17.030020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.396 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.653 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:01.653 "name": "Existed_Raid", 00:08:01.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.653 "strip_size_kb": 64, 00:08:01.653 "state": "configuring", 00:08:01.653 "raid_level": "concat", 00:08:01.653 "superblock": false, 00:08:01.653 "num_base_bdevs": 
2, 00:08:01.653 "num_base_bdevs_discovered": 0, 00:08:01.653 "num_base_bdevs_operational": 2, 00:08:01.653 "base_bdevs_list": [ 00:08:01.653 { 00:08:01.653 "name": "BaseBdev1", 00:08:01.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.653 "is_configured": false, 00:08:01.653 "data_offset": 0, 00:08:01.653 "data_size": 0 00:08:01.653 }, 00:08:01.653 { 00:08:01.653 "name": "BaseBdev2", 00:08:01.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.653 "is_configured": false, 00:08:01.653 "data_offset": 0, 00:08:01.653 "data_size": 0 00:08:01.653 } 00:08:01.653 ] 00:08:01.653 }' 00:08:01.653 11:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:01.653 11:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.217 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:02.474 [2024-07-25 11:19:18.318107] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.474 [2024-07-25 11:19:18.318163] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:02.474 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:02.731 [2024-07-25 11:19:18.582152] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.731 [2024-07-25 11:19:18.582212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.731 [2024-07-25 11:19:18.582232] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.731 [2024-07-25 11:19:18.582246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.731 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.296 [2024-07-25 11:19:18.887553] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.296 BaseBdev1 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:03.296 11:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:03.296 11:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.553 [ 00:08:03.553 { 00:08:03.553 "name": "BaseBdev1", 00:08:03.553 "aliases": [ 00:08:03.553 
"4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f" 00:08:03.553 ], 00:08:03.553 "product_name": "Malloc disk", 00:08:03.553 "block_size": 512, 00:08:03.553 "num_blocks": 65536, 00:08:03.553 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:03.553 "assigned_rate_limits": { 00:08:03.553 "rw_ios_per_sec": 0, 00:08:03.553 "rw_mbytes_per_sec": 0, 00:08:03.553 "r_mbytes_per_sec": 0, 00:08:03.553 "w_mbytes_per_sec": 0 00:08:03.553 }, 00:08:03.553 "claimed": true, 00:08:03.553 "claim_type": "exclusive_write", 00:08:03.553 "zoned": false, 00:08:03.553 "supported_io_types": { 00:08:03.553 "read": true, 00:08:03.553 "write": true, 00:08:03.553 "unmap": true, 00:08:03.553 "flush": true, 00:08:03.553 "reset": true, 00:08:03.553 "nvme_admin": false, 00:08:03.553 "nvme_io": false, 00:08:03.553 "nvme_io_md": false, 00:08:03.553 "write_zeroes": true, 00:08:03.553 "zcopy": true, 00:08:03.553 "get_zone_info": false, 00:08:03.553 "zone_management": false, 00:08:03.553 "zone_append": false, 00:08:03.553 "compare": false, 00:08:03.553 "compare_and_write": false, 00:08:03.553 "abort": true, 00:08:03.553 "seek_hole": false, 00:08:03.553 "seek_data": false, 00:08:03.553 "copy": true, 00:08:03.553 "nvme_iov_md": false 00:08:03.553 }, 00:08:03.553 "memory_domains": [ 00:08:03.553 { 00:08:03.553 "dma_device_id": "system", 00:08:03.553 "dma_device_type": 1 00:08:03.553 }, 00:08:03.553 { 00:08:03.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.553 "dma_device_type": 2 00:08:03.553 } 00:08:03.553 ], 00:08:03.553 "driver_specific": {} 00:08:03.553 } 00:08:03.553 ] 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.553 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.810 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:03.810 "name": "Existed_Raid", 00:08:03.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.810 "strip_size_kb": 64, 00:08:03.810 "state": "configuring", 00:08:03.810 "raid_level": "concat", 00:08:03.810 "superblock": false, 00:08:03.810 "num_base_bdevs": 2, 00:08:03.810 
"num_base_bdevs_discovered": 1, 00:08:03.810 "num_base_bdevs_operational": 2, 00:08:03.810 "base_bdevs_list": [ 00:08:03.810 { 00:08:03.810 "name": "BaseBdev1", 00:08:03.810 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:03.810 "is_configured": true, 00:08:03.810 "data_offset": 0, 00:08:03.810 "data_size": 65536 00:08:03.810 }, 00:08:03.810 { 00:08:03.810 "name": "BaseBdev2", 00:08:03.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.810 "is_configured": false, 00:08:03.810 "data_offset": 0, 00:08:03.810 "data_size": 0 00:08:03.810 } 00:08:03.810 ] 00:08:03.810 }' 00:08:03.810 11:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:03.810 11:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.743 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:04.743 [2024-07-25 11:19:20.548063] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.743 [2024-07-25 11:19:20.548141] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.743 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:05.002 [2024-07-25 11:19:20.764154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.002 [2024-07-25 11:19:20.766545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.002 [2024-07-25 11:19:20.766606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.002 11:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:05.261 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.261 "name": "Existed_Raid", 00:08:05.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.261 "strip_size_kb": 64, 00:08:05.261 "state": "configuring", 00:08:05.261 "raid_level": "concat", 00:08:05.261 "superblock": false, 00:08:05.261 "num_base_bdevs": 2, 00:08:05.261 "num_base_bdevs_discovered": 1, 00:08:05.261 "num_base_bdevs_operational": 2, 00:08:05.261 "base_bdevs_list": [ 00:08:05.261 { 00:08:05.261 "name": "BaseBdev1", 00:08:05.261 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:05.261 "is_configured": true, 00:08:05.261 "data_offset": 0, 00:08:05.261 "data_size": 65536 00:08:05.261 }, 00:08:05.261 { 00:08:05.261 "name": "BaseBdev2", 00:08:05.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.261 "is_configured": false, 00:08:05.261 "data_offset": 0, 00:08:05.261 "data_size": 0 00:08:05.261 } 00:08:05.261 ] 00:08:05.261 }' 00:08:05.261 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.261 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.827 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.084 [2024-07-25 11:19:21.952768] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.084 [2024-07-25 11:19:21.953154] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.084 [2024-07-25 11:19:21.953219] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:06.084 [2024-07-25 11:19:21.953706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:06.084 [2024-07-25 11:19:21.954068] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.085 [2024-07-25 11:19:21.954204] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:06.085 [2024-07-25 11:19:21.954800] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.085 BaseBdev2 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.343 11:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:06.343 11:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.601 [ 00:08:06.601 { 00:08:06.601 "name": "BaseBdev2", 00:08:06.601 "aliases": [ 00:08:06.601 "1683ddd1-6b53-4253-90a1-7abff05b9ee3" 00:08:06.601 ], 00:08:06.601 
"product_name": "Malloc disk", 00:08:06.601 "block_size": 512, 00:08:06.601 "num_blocks": 65536, 00:08:06.601 "uuid": "1683ddd1-6b53-4253-90a1-7abff05b9ee3", 00:08:06.601 "assigned_rate_limits": { 00:08:06.601 "rw_ios_per_sec": 0, 00:08:06.601 "rw_mbytes_per_sec": 0, 00:08:06.601 "r_mbytes_per_sec": 0, 00:08:06.601 "w_mbytes_per_sec": 0 00:08:06.601 }, 00:08:06.601 "claimed": true, 00:08:06.601 "claim_type": "exclusive_write", 00:08:06.601 "zoned": false, 00:08:06.601 "supported_io_types": { 00:08:06.601 "read": true, 00:08:06.601 "write": true, 00:08:06.601 "unmap": true, 00:08:06.601 "flush": true, 00:08:06.601 "reset": true, 00:08:06.601 "nvme_admin": false, 00:08:06.601 "nvme_io": false, 00:08:06.601 "nvme_io_md": false, 00:08:06.601 "write_zeroes": true, 00:08:06.601 "zcopy": true, 00:08:06.601 "get_zone_info": false, 00:08:06.601 "zone_management": false, 00:08:06.601 "zone_append": false, 00:08:06.601 "compare": false, 00:08:06.601 "compare_and_write": false, 00:08:06.601 "abort": true, 00:08:06.601 "seek_hole": false, 00:08:06.601 "seek_data": false, 00:08:06.601 "copy": true, 00:08:06.601 "nvme_iov_md": false 00:08:06.601 }, 00:08:06.601 "memory_domains": [ 00:08:06.601 { 00:08:06.601 "dma_device_id": "system", 00:08:06.601 "dma_device_type": 1 00:08:06.601 }, 00:08:06.601 { 00:08:06.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.601 "dma_device_type": 2 00:08:06.601 } 00:08:06.601 ], 00:08:06.601 "driver_specific": {} 00:08:06.601 } 00:08:06.601 ] 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.601 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.859 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:06.859 "name": "Existed_Raid", 00:08:06.859 "uuid": "af406849-3e4c-4791-aa30-1f605c5d25ad", 00:08:06.859 "strip_size_kb": 64, 00:08:06.859 "state": "online", 00:08:06.859 
"raid_level": "concat", 00:08:06.859 "superblock": false, 00:08:06.859 "num_base_bdevs": 2, 00:08:06.859 "num_base_bdevs_discovered": 2, 00:08:06.860 "num_base_bdevs_operational": 2, 00:08:06.860 "base_bdevs_list": [ 00:08:06.860 { 00:08:06.860 "name": "BaseBdev1", 00:08:06.860 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:06.860 "is_configured": true, 00:08:06.860 "data_offset": 0, 00:08:06.860 "data_size": 65536 00:08:06.860 }, 00:08:06.860 { 00:08:06.860 "name": "BaseBdev2", 00:08:06.860 "uuid": "1683ddd1-6b53-4253-90a1-7abff05b9ee3", 00:08:06.860 "is_configured": true, 00:08:06.860 "data_offset": 0, 00:08:06.860 "data_size": 65536 00:08:06.860 } 00:08:06.860 ] 00:08:06.860 }' 00:08:06.860 11:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:06.860 11:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.792 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:07.793 [2024-07-25 11:19:23.613750] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:07.793 "name": "Existed_Raid", 00:08:07.793 "aliases": [ 00:08:07.793 "af406849-3e4c-4791-aa30-1f605c5d25ad" 00:08:07.793 ], 00:08:07.793 "product_name": "Raid Volume", 00:08:07.793 "block_size": 512, 00:08:07.793 "num_blocks": 131072, 00:08:07.793 "uuid": "af406849-3e4c-4791-aa30-1f605c5d25ad", 00:08:07.793 "assigned_rate_limits": { 00:08:07.793 "rw_ios_per_sec": 0, 00:08:07.793 "rw_mbytes_per_sec": 0, 00:08:07.793 "r_mbytes_per_sec": 0, 00:08:07.793 "w_mbytes_per_sec": 0 00:08:07.793 }, 00:08:07.793 "claimed": false, 00:08:07.793 "zoned": false, 00:08:07.793 "supported_io_types": { 00:08:07.793 "read": true, 00:08:07.793 "write": true, 00:08:07.793 "unmap": true, 00:08:07.793 "flush": true, 00:08:07.793 "reset": true, 00:08:07.793 "nvme_admin": false, 00:08:07.793 "nvme_io": false, 00:08:07.793 "nvme_io_md": false, 00:08:07.793 "write_zeroes": true, 00:08:07.793 "zcopy": false, 00:08:07.793 "get_zone_info": false, 00:08:07.793 "zone_management": false, 00:08:07.793 "zone_append": false, 00:08:07.793 "compare": false, 00:08:07.793 "compare_and_write": false, 00:08:07.793 "abort": false, 00:08:07.793 "seek_hole": false, 00:08:07.793 "seek_data": false, 00:08:07.793 "copy": false, 00:08:07.793 "nvme_iov_md": false 00:08:07.793 }, 00:08:07.793 "memory_domains": [ 00:08:07.793 { 00:08:07.793 "dma_device_id": "system", 00:08:07.793 "dma_device_type": 1 00:08:07.793 }, 00:08:07.793 { 00:08:07.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:07.793 "dma_device_type": 2 00:08:07.793 }, 00:08:07.793 { 00:08:07.793 "dma_device_id": "system", 00:08:07.793 "dma_device_type": 1 00:08:07.793 }, 00:08:07.793 { 00:08:07.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.793 "dma_device_type": 2 00:08:07.793 } 00:08:07.793 ], 00:08:07.793 "driver_specific": { 00:08:07.793 "raid": { 00:08:07.793 "uuid": "af406849-3e4c-4791-aa30-1f605c5d25ad", 00:08:07.793 "strip_size_kb": 64, 00:08:07.793 "state": "online", 00:08:07.793 "raid_level": "concat", 00:08:07.793 "superblock": false, 00:08:07.793 "num_base_bdevs": 2, 00:08:07.793 "num_base_bdevs_discovered": 2, 00:08:07.793 "num_base_bdevs_operational": 2, 00:08:07.793 "base_bdevs_list": [ 00:08:07.793 { 00:08:07.793 "name": "BaseBdev1", 00:08:07.793 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:07.793 "is_configured": true, 00:08:07.793 "data_offset": 0, 00:08:07.793 "data_size": 65536 00:08:07.793 }, 00:08:07.793 { 00:08:07.793 "name": "BaseBdev2", 00:08:07.793 "uuid": "1683ddd1-6b53-4253-90a1-7abff05b9ee3", 00:08:07.793 "is_configured": true, 00:08:07.793 "data_offset": 0, 00:08:07.793 "data_size": 65536 00:08:07.793 } 00:08:07.793 ] 00:08:07.793 } 00:08:07.793 } 00:08:07.793 }' 00:08:07.793 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.051 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:08.051 BaseBdev2' 00:08:08.051 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:08.051 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:08.051 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:08.309 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:08.309 "name": "BaseBdev1", 00:08:08.309 "aliases": [ 00:08:08.309 "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f" 00:08:08.309 ], 00:08:08.309 "product_name": "Malloc disk", 00:08:08.309 "block_size": 512, 00:08:08.309 "num_blocks": 65536, 00:08:08.309 "uuid": "4d77cdd4-6485-47aa-99cd-cf0aec3d5d3f", 00:08:08.309 "assigned_rate_limits": { 00:08:08.309 "rw_ios_per_sec": 0, 00:08:08.309 "rw_mbytes_per_sec": 0, 00:08:08.309 "r_mbytes_per_sec": 0, 00:08:08.309 "w_mbytes_per_sec": 0 00:08:08.309 }, 00:08:08.309 "claimed": true, 00:08:08.309 "claim_type": "exclusive_write", 00:08:08.309 "zoned": false, 00:08:08.309 "supported_io_types": { 00:08:08.309 "read": true, 00:08:08.309 "write": true, 00:08:08.309 "unmap": true, 00:08:08.309 "flush": true, 00:08:08.309 "reset": true, 00:08:08.309 "nvme_admin": false, 00:08:08.309 "nvme_io": false, 00:08:08.309 "nvme_io_md": false, 00:08:08.309 "write_zeroes": true, 00:08:08.309 "zcopy": true, 00:08:08.309 "get_zone_info": false, 00:08:08.309 "zone_management": false, 00:08:08.309 "zone_append": false, 00:08:08.309 "compare": false, 00:08:08.309 "compare_and_write": false, 00:08:08.309 "abort": true, 00:08:08.309 "seek_hole": false, 00:08:08.309 "seek_data": false, 00:08:08.309 "copy": true, 00:08:08.309 "nvme_iov_md": false 00:08:08.309 }, 00:08:08.309 "memory_domains": [ 00:08:08.309 { 00:08:08.309 "dma_device_id": "system", 00:08:08.309 "dma_device_type": 1 00:08:08.309 }, 00:08:08.309 { 00:08:08.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.309 
"dma_device_type": 2 00:08:08.309 } 00:08:08.309 ], 00:08:08.309 "driver_specific": {} 00:08:08.309 }' 00:08:08.309 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:08.309 11:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:08.309 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:08.309 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:08.309 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:08.309 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:08.309 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:08.568 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:08.827 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:08.827 "name": "BaseBdev2", 00:08:08.827 "aliases": [ 00:08:08.827 "1683ddd1-6b53-4253-90a1-7abff05b9ee3" 00:08:08.827 ], 00:08:08.827 "product_name": "Malloc disk", 00:08:08.827 "block_size": 512, 00:08:08.827 "num_blocks": 65536, 00:08:08.827 "uuid": "1683ddd1-6b53-4253-90a1-7abff05b9ee3", 00:08:08.827 "assigned_rate_limits": { 00:08:08.827 "rw_ios_per_sec": 0, 00:08:08.827 "rw_mbytes_per_sec": 0, 00:08:08.827 "r_mbytes_per_sec": 0, 00:08:08.827 "w_mbytes_per_sec": 0 00:08:08.827 }, 00:08:08.827 "claimed": true, 00:08:08.827 "claim_type": "exclusive_write", 00:08:08.827 "zoned": false, 00:08:08.827 "supported_io_types": { 00:08:08.827 "read": true, 00:08:08.827 "write": true, 00:08:08.827 "unmap": true, 00:08:08.827 "flush": true, 00:08:08.827 "reset": true, 00:08:08.827 "nvme_admin": false, 00:08:08.827 "nvme_io": false, 00:08:08.827 "nvme_io_md": false, 00:08:08.827 "write_zeroes": true, 00:08:08.827 "zcopy": true, 00:08:08.827 "get_zone_info": false, 00:08:08.827 "zone_management": false, 00:08:08.827 "zone_append": false, 00:08:08.827 "compare": false, 00:08:08.827 "compare_and_write": false, 00:08:08.827 "abort": true, 00:08:08.827 "seek_hole": false, 00:08:08.827 "seek_data": false, 00:08:08.827 "copy": true, 00:08:08.827 "nvme_iov_md": false 00:08:08.827 }, 00:08:08.827 "memory_domains": [ 00:08:08.827 { 00:08:08.827 "dma_device_id": "system", 00:08:08.827 "dma_device_type": 1 00:08:08.827 }, 00:08:08.827 { 00:08:08.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.827 "dma_device_type": 2 00:08:08.827 } 00:08:08.827 ], 00:08:08.827 "driver_specific": {} 00:08:08.827 }' 00:08:08.827 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:09.086 11:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.344 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.344 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:09.344 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:09.603 [2024-07-25 11:19:25.337912] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.603 [2024-07-25 11:19:25.337956] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.603 [2024-07-25 11:19:25.338035] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:09.603 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.603 
11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.862 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:09.862 "name": "Existed_Raid", 00:08:09.862 "uuid": "af406849-3e4c-4791-aa30-1f605c5d25ad", 00:08:09.862 "strip_size_kb": 64, 00:08:09.862 "state": "offline", 00:08:09.862 "raid_level": "concat", 00:08:09.862 "superblock": false, 00:08:09.862 "num_base_bdevs": 2, 00:08:09.862 "num_base_bdevs_discovered": 1, 00:08:09.862 "num_base_bdevs_operational": 1, 00:08:09.862 "base_bdevs_list": [ 00:08:09.862 { 00:08:09.862 "name": null, 00:08:09.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.862 "is_configured": false, 00:08:09.862 "data_offset": 0, 00:08:09.862 "data_size": 65536 00:08:09.862 }, 00:08:09.862 { 00:08:09.862 "name": "BaseBdev2", 00:08:09.862 "uuid": "1683ddd1-6b53-4253-90a1-7abff05b9ee3", 00:08:09.862 "is_configured": true, 00:08:09.862 "data_offset": 0, 00:08:09.862 "data_size": 65536 00:08:09.862 } 00:08:09.862 ] 00:08:09.862 }' 00:08:09.862 11:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:09.862 11:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.796 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:11.053 [2024-07-25 11:19:26.794852] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.053 [2024-07-25 11:19:26.794937] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.053 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:11.053 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:11.053 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.053 11:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 64104 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 64104 ']' 00:08:11.311 11:19:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64104 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.311 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64104 00:08:11.569 killing process with pid 64104 00:08:11.569 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.569 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.569 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64104' 00:08:11.569 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64104 00:08:11.569 [2024-07-25 11:19:27.200896] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.569 11:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64104 00:08:11.569 [2024-07-25 11:19:27.215519] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.502 11:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:12.502 00:08:12.502 real 0m12.689s 00:08:12.502 user 0m22.146s 00:08:12.502 sys 0m1.632s 00:08:12.502 11:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.502 ************************************ 00:08:12.502 END TEST raid_state_function_test 00:08:12.502 ************************************ 00:08:12.502 11:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.761 11:19:28 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:12.761 11:19:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.761 11:19:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.761 11:19:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.761 ************************************ 00:08:12.761 START TEST raid_state_function_test_sb 00:08:12.761 ************************************ 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:12.761 Process raid pid: 64478 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=64478 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 64478' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 64478 /var/tmp/spdk-raid.sock 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64478 ']' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:12.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.761 11:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.761 [2024-07-25 11:19:28.526706] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
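For reference, the RPC sequence this superblock-enabled state test drives through bdev_svc can also be issued by hand against the same UNIX socket. A minimal sketch, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock as started above (the test itself interleaves these steps differently, first creating the raid while its base bdevs do not yet exist in order to exercise the "configuring" state):
# two 32 MiB malloc base bdevs with a 512-byte block size (65536 blocks each)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
# assemble a concat raid bdev with a 64 KiB strip size; -s enables the on-disk superblock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# inspect the resulting array state the same way verify_raid_bdev_state does
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'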
00:08:12.761 [2024-07-25 11:19:28.526849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.019 [2024-07-25 11:19:28.685228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.277 [2024-07-25 11:19:28.925867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.277 [2024-07-25 11:19:29.129690] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.277 [2024-07-25 11:19:29.129742] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.536 11:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.536 11:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:13.536 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:13.795 [2024-07-25 11:19:29.633258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.795 [2024-07-25 11:19:29.633333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.795 [2024-07-25 11:19:29.633354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.795 [2024-07-25 11:19:29.633367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.795 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.055 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.055 "name": "Existed_Raid", 00:08:14.055 "uuid": "da36c76a-0bf4-4c9e-ac21-93091e6dec7f", 00:08:14.055 "strip_size_kb": 64, 00:08:14.055 "state": "configuring", 00:08:14.055 "raid_level": "concat", 00:08:14.055 
"superblock": true, 00:08:14.055 "num_base_bdevs": 2, 00:08:14.055 "num_base_bdevs_discovered": 0, 00:08:14.055 "num_base_bdevs_operational": 2, 00:08:14.055 "base_bdevs_list": [ 00:08:14.055 { 00:08:14.055 "name": "BaseBdev1", 00:08:14.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.055 "is_configured": false, 00:08:14.055 "data_offset": 0, 00:08:14.055 "data_size": 0 00:08:14.055 }, 00:08:14.055 { 00:08:14.055 "name": "BaseBdev2", 00:08:14.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.055 "is_configured": false, 00:08:14.055 "data_offset": 0, 00:08:14.055 "data_size": 0 00:08:14.055 } 00:08:14.055 ] 00:08:14.055 }' 00:08:14.055 11:19:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.055 11:19:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.010 11:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:15.010 [2024-07-25 11:19:30.765389] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:15.010 [2024-07-25 11:19:30.765431] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:15.010 11:19:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:15.269 [2024-07-25 11:19:31.049520] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.269 [2024-07-25 11:19:31.049585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.269 [2024-07-25 11:19:31.049605] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.269 [2024-07-25 11:19:31.049638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.269 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.528 [2024-07-25 11:19:31.350574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.528 BaseBdev1 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.528 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:15.786 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.044 [ 00:08:16.044 { 
00:08:16.044 "name": "BaseBdev1", 00:08:16.044 "aliases": [ 00:08:16.044 "45480edd-0821-4652-aa5c-9caa635b6b4b" 00:08:16.044 ], 00:08:16.044 "product_name": "Malloc disk", 00:08:16.044 "block_size": 512, 00:08:16.044 "num_blocks": 65536, 00:08:16.044 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:16.044 "assigned_rate_limits": { 00:08:16.044 "rw_ios_per_sec": 0, 00:08:16.044 "rw_mbytes_per_sec": 0, 00:08:16.044 "r_mbytes_per_sec": 0, 00:08:16.044 "w_mbytes_per_sec": 0 00:08:16.044 }, 00:08:16.044 "claimed": true, 00:08:16.044 "claim_type": "exclusive_write", 00:08:16.044 "zoned": false, 00:08:16.044 "supported_io_types": { 00:08:16.044 "read": true, 00:08:16.044 "write": true, 00:08:16.044 "unmap": true, 00:08:16.044 "flush": true, 00:08:16.044 "reset": true, 00:08:16.044 "nvme_admin": false, 00:08:16.044 "nvme_io": false, 00:08:16.044 "nvme_io_md": false, 00:08:16.044 "write_zeroes": true, 00:08:16.044 "zcopy": true, 00:08:16.044 "get_zone_info": false, 00:08:16.044 "zone_management": false, 00:08:16.044 "zone_append": false, 00:08:16.044 "compare": false, 00:08:16.044 "compare_and_write": false, 00:08:16.044 "abort": true, 00:08:16.044 "seek_hole": false, 00:08:16.044 "seek_data": false, 00:08:16.044 "copy": true, 00:08:16.044 "nvme_iov_md": false 00:08:16.044 }, 00:08:16.044 "memory_domains": [ 00:08:16.044 { 00:08:16.044 "dma_device_id": "system", 00:08:16.044 "dma_device_type": 1 00:08:16.044 }, 00:08:16.044 { 00:08:16.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.044 "dma_device_type": 2 00:08:16.044 } 00:08:16.044 ], 00:08:16.044 "driver_specific": {} 00:08:16.044 } 00:08:16.044 ] 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.044 11:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.302 11:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:16.302 "name": "Existed_Raid", 00:08:16.302 "uuid": "0fc00a39-8e4a-4043-aee7-a7458dfce827", 00:08:16.302 "strip_size_kb": 64, 00:08:16.302 "state": "configuring", 00:08:16.302 "raid_level": 
"concat", 00:08:16.302 "superblock": true, 00:08:16.302 "num_base_bdevs": 2, 00:08:16.302 "num_base_bdevs_discovered": 1, 00:08:16.302 "num_base_bdevs_operational": 2, 00:08:16.302 "base_bdevs_list": [ 00:08:16.302 { 00:08:16.302 "name": "BaseBdev1", 00:08:16.302 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:16.302 "is_configured": true, 00:08:16.302 "data_offset": 2048, 00:08:16.302 "data_size": 63488 00:08:16.302 }, 00:08:16.302 { 00:08:16.302 "name": "BaseBdev2", 00:08:16.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.302 "is_configured": false, 00:08:16.302 "data_offset": 0, 00:08:16.302 "data_size": 0 00:08:16.302 } 00:08:16.302 ] 00:08:16.302 }' 00:08:16.302 11:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:16.302 11:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.239 11:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:17.239 [2024-07-25 11:19:33.011111] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.239 [2024-07-25 11:19:33.011185] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.239 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:17.498 [2024-07-25 11:19:33.287236] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.498 [2024-07-25 11:19:33.289582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.498 [2024-07-25 11:19:33.289639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:17.498 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.756 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.756 "name": "Existed_Raid", 00:08:17.756 "uuid": "8222141e-fabf-4d32-9dbc-809c4dc56c16", 00:08:17.756 "strip_size_kb": 64, 00:08:17.756 "state": "configuring", 00:08:17.756 "raid_level": "concat", 00:08:17.756 "superblock": true, 00:08:17.756 "num_base_bdevs": 2, 00:08:17.756 "num_base_bdevs_discovered": 1, 00:08:17.756 "num_base_bdevs_operational": 2, 00:08:17.756 "base_bdevs_list": [ 00:08:17.756 { 00:08:17.756 "name": "BaseBdev1", 00:08:17.756 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:17.756 "is_configured": true, 00:08:17.756 "data_offset": 2048, 00:08:17.756 "data_size": 63488 00:08:17.756 }, 00:08:17.756 { 00:08:17.757 "name": "BaseBdev2", 00:08:17.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.757 "is_configured": false, 00:08:17.757 "data_offset": 0, 00:08:17.757 "data_size": 0 00:08:17.757 } 00:08:17.757 ] 00:08:17.757 }' 00:08:17.757 11:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.757 11:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.322 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:18.889 [2024-07-25 11:19:34.467297] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.889 [2024-07-25 11:19:34.467899] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.889 [2024-07-25 11:19:34.468065] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.889 [2024-07-25 11:19:34.468449] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.889 [2024-07-25 11:19:34.468789] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.889 BaseBdev2 00:08:18.889 [2024-07-25 11:19:34.468945] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:18.889 [2024-07-25 11:19:34.469275] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:18.889 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:08:19.147 [ 00:08:19.147 { 00:08:19.147 "name": "BaseBdev2", 00:08:19.147 "aliases": [ 00:08:19.147 "74155c05-663c-43a8-bb31-4114525ef240" 00:08:19.147 ], 00:08:19.147 "product_name": "Malloc disk", 00:08:19.147 "block_size": 512, 00:08:19.147 "num_blocks": 65536, 00:08:19.147 "uuid": "74155c05-663c-43a8-bb31-4114525ef240", 00:08:19.147 "assigned_rate_limits": { 00:08:19.147 "rw_ios_per_sec": 0, 00:08:19.147 "rw_mbytes_per_sec": 0, 00:08:19.147 "r_mbytes_per_sec": 0, 00:08:19.147 "w_mbytes_per_sec": 0 00:08:19.147 }, 00:08:19.147 "claimed": true, 00:08:19.147 "claim_type": "exclusive_write", 00:08:19.147 "zoned": false, 00:08:19.147 "supported_io_types": { 00:08:19.147 "read": true, 00:08:19.147 "write": true, 00:08:19.147 "unmap": true, 00:08:19.147 "flush": true, 00:08:19.147 "reset": true, 00:08:19.147 "nvme_admin": false, 00:08:19.147 "nvme_io": false, 00:08:19.147 "nvme_io_md": false, 00:08:19.147 "write_zeroes": true, 00:08:19.147 "zcopy": true, 00:08:19.147 "get_zone_info": false, 00:08:19.147 "zone_management": false, 00:08:19.147 "zone_append": false, 00:08:19.147 "compare": false, 00:08:19.147 "compare_and_write": false, 00:08:19.147 "abort": true, 00:08:19.147 "seek_hole": false, 00:08:19.147 "seek_data": false, 00:08:19.147 "copy": true, 00:08:19.147 "nvme_iov_md": false 00:08:19.147 }, 00:08:19.147 "memory_domains": [ 00:08:19.147 { 00:08:19.147 "dma_device_id": "system", 00:08:19.147 "dma_device_type": 1 00:08:19.147 }, 00:08:19.147 { 00:08:19.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.147 "dma_device_type": 2 00:08:19.147 } 00:08:19.147 ], 00:08:19.147 "driver_specific": {} 00:08:19.147 } 00:08:19.147 ] 00:08:19.147 11:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:19.147 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.148 11:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.406 11:19:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:19.406 "name": "Existed_Raid", 00:08:19.406 "uuid": "8222141e-fabf-4d32-9dbc-809c4dc56c16", 00:08:19.406 "strip_size_kb": 64, 00:08:19.406 "state": "online", 00:08:19.406 "raid_level": "concat", 00:08:19.406 "superblock": true, 00:08:19.406 "num_base_bdevs": 2, 00:08:19.406 "num_base_bdevs_discovered": 2, 00:08:19.406 "num_base_bdevs_operational": 2, 00:08:19.406 "base_bdevs_list": [ 00:08:19.406 { 00:08:19.406 "name": "BaseBdev1", 00:08:19.406 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:19.406 "is_configured": true, 00:08:19.406 "data_offset": 2048, 00:08:19.406 "data_size": 63488 00:08:19.406 }, 00:08:19.406 { 00:08:19.406 "name": "BaseBdev2", 00:08:19.406 "uuid": "74155c05-663c-43a8-bb31-4114525ef240", 00:08:19.406 "is_configured": true, 00:08:19.406 "data_offset": 2048, 00:08:19.406 "data_size": 63488 00:08:19.406 } 00:08:19.406 ] 00:08:19.406 }' 00:08:19.406 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:19.406 11:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:20.343 11:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:20.343 [2024-07-25 11:19:36.080183] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.343 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:20.343 "name": "Existed_Raid", 00:08:20.343 "aliases": [ 00:08:20.343 "8222141e-fabf-4d32-9dbc-809c4dc56c16" 00:08:20.343 ], 00:08:20.343 "product_name": "Raid Volume", 00:08:20.343 "block_size": 512, 00:08:20.343 "num_blocks": 126976, 00:08:20.343 "uuid": "8222141e-fabf-4d32-9dbc-809c4dc56c16", 00:08:20.343 "assigned_rate_limits": { 00:08:20.343 "rw_ios_per_sec": 0, 00:08:20.343 "rw_mbytes_per_sec": 0, 00:08:20.343 "r_mbytes_per_sec": 0, 00:08:20.343 "w_mbytes_per_sec": 0 00:08:20.343 }, 00:08:20.343 "claimed": false, 00:08:20.343 "zoned": false, 00:08:20.343 "supported_io_types": { 00:08:20.343 "read": true, 00:08:20.343 "write": true, 00:08:20.343 "unmap": true, 00:08:20.343 "flush": true, 00:08:20.343 "reset": true, 00:08:20.343 "nvme_admin": false, 00:08:20.343 "nvme_io": false, 00:08:20.343 "nvme_io_md": false, 00:08:20.343 "write_zeroes": true, 00:08:20.343 "zcopy": false, 00:08:20.344 "get_zone_info": false, 00:08:20.344 "zone_management": false, 00:08:20.344 "zone_append": false, 00:08:20.344 "compare": false, 00:08:20.344 "compare_and_write": false, 00:08:20.344 "abort": false, 00:08:20.344 "seek_hole": false, 00:08:20.344 "seek_data": 
false, 00:08:20.344 "copy": false, 00:08:20.344 "nvme_iov_md": false 00:08:20.344 }, 00:08:20.344 "memory_domains": [ 00:08:20.344 { 00:08:20.344 "dma_device_id": "system", 00:08:20.344 "dma_device_type": 1 00:08:20.344 }, 00:08:20.344 { 00:08:20.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.344 "dma_device_type": 2 00:08:20.344 }, 00:08:20.344 { 00:08:20.344 "dma_device_id": "system", 00:08:20.344 "dma_device_type": 1 00:08:20.344 }, 00:08:20.344 { 00:08:20.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.344 "dma_device_type": 2 00:08:20.344 } 00:08:20.344 ], 00:08:20.344 "driver_specific": { 00:08:20.344 "raid": { 00:08:20.344 "uuid": "8222141e-fabf-4d32-9dbc-809c4dc56c16", 00:08:20.344 "strip_size_kb": 64, 00:08:20.344 "state": "online", 00:08:20.344 "raid_level": "concat", 00:08:20.344 "superblock": true, 00:08:20.344 "num_base_bdevs": 2, 00:08:20.344 "num_base_bdevs_discovered": 2, 00:08:20.344 "num_base_bdevs_operational": 2, 00:08:20.344 "base_bdevs_list": [ 00:08:20.344 { 00:08:20.344 "name": "BaseBdev1", 00:08:20.344 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:20.344 "is_configured": true, 00:08:20.344 "data_offset": 2048, 00:08:20.344 "data_size": 63488 00:08:20.344 }, 00:08:20.344 { 00:08:20.344 "name": "BaseBdev2", 00:08:20.344 "uuid": "74155c05-663c-43a8-bb31-4114525ef240", 00:08:20.344 "is_configured": true, 00:08:20.344 "data_offset": 2048, 00:08:20.344 "data_size": 63488 00:08:20.344 } 00:08:20.344 ] 00:08:20.344 } 00:08:20.344 } 00:08:20.344 }' 00:08:20.344 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.344 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:20.344 BaseBdev2' 00:08:20.344 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:20.344 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:20.344 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:20.603 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:20.603 "name": "BaseBdev1", 00:08:20.603 "aliases": [ 00:08:20.603 "45480edd-0821-4652-aa5c-9caa635b6b4b" 00:08:20.603 ], 00:08:20.603 "product_name": "Malloc disk", 00:08:20.603 "block_size": 512, 00:08:20.603 "num_blocks": 65536, 00:08:20.603 "uuid": "45480edd-0821-4652-aa5c-9caa635b6b4b", 00:08:20.603 "assigned_rate_limits": { 00:08:20.603 "rw_ios_per_sec": 0, 00:08:20.603 "rw_mbytes_per_sec": 0, 00:08:20.603 "r_mbytes_per_sec": 0, 00:08:20.603 "w_mbytes_per_sec": 0 00:08:20.603 }, 00:08:20.603 "claimed": true, 00:08:20.603 "claim_type": "exclusive_write", 00:08:20.603 "zoned": false, 00:08:20.603 "supported_io_types": { 00:08:20.603 "read": true, 00:08:20.603 "write": true, 00:08:20.603 "unmap": true, 00:08:20.603 "flush": true, 00:08:20.603 "reset": true, 00:08:20.603 "nvme_admin": false, 00:08:20.603 "nvme_io": false, 00:08:20.603 "nvme_io_md": false, 00:08:20.603 "write_zeroes": true, 00:08:20.603 "zcopy": true, 00:08:20.603 "get_zone_info": false, 00:08:20.603 "zone_management": false, 00:08:20.603 "zone_append": false, 00:08:20.603 "compare": false, 00:08:20.603 "compare_and_write": false, 00:08:20.603 "abort": true, 00:08:20.603 "seek_hole": false, 00:08:20.603 "seek_data": 
false, 00:08:20.603 "copy": true, 00:08:20.603 "nvme_iov_md": false 00:08:20.603 }, 00:08:20.603 "memory_domains": [ 00:08:20.603 { 00:08:20.603 "dma_device_id": "system", 00:08:20.603 "dma_device_type": 1 00:08:20.603 }, 00:08:20.603 { 00:08:20.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.603 "dma_device_type": 2 00:08:20.603 } 00:08:20.603 ], 00:08:20.603 "driver_specific": {} 00:08:20.603 }' 00:08:20.603 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:20.862 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:21.121 11:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:21.380 "name": "BaseBdev2", 00:08:21.380 "aliases": [ 00:08:21.380 "74155c05-663c-43a8-bb31-4114525ef240" 00:08:21.380 ], 00:08:21.380 "product_name": "Malloc disk", 00:08:21.380 "block_size": 512, 00:08:21.380 "num_blocks": 65536, 00:08:21.380 "uuid": "74155c05-663c-43a8-bb31-4114525ef240", 00:08:21.380 "assigned_rate_limits": { 00:08:21.380 "rw_ios_per_sec": 0, 00:08:21.380 "rw_mbytes_per_sec": 0, 00:08:21.380 "r_mbytes_per_sec": 0, 00:08:21.380 "w_mbytes_per_sec": 0 00:08:21.380 }, 00:08:21.380 "claimed": true, 00:08:21.380 "claim_type": "exclusive_write", 00:08:21.380 "zoned": false, 00:08:21.380 "supported_io_types": { 00:08:21.380 "read": true, 00:08:21.380 "write": true, 00:08:21.380 "unmap": true, 00:08:21.380 "flush": true, 00:08:21.380 "reset": true, 00:08:21.380 "nvme_admin": false, 00:08:21.380 "nvme_io": false, 00:08:21.380 "nvme_io_md": false, 00:08:21.380 "write_zeroes": true, 00:08:21.380 "zcopy": true, 00:08:21.380 "get_zone_info": false, 00:08:21.380 "zone_management": false, 00:08:21.380 "zone_append": false, 00:08:21.380 "compare": false, 00:08:21.380 "compare_and_write": false, 00:08:21.380 "abort": true, 00:08:21.380 "seek_hole": false, 00:08:21.380 "seek_data": false, 00:08:21.380 "copy": true, 00:08:21.380 "nvme_iov_md": false 00:08:21.380 }, 00:08:21.380 "memory_domains": [ 00:08:21.380 { 00:08:21.380 
"dma_device_id": "system", 00:08:21.380 "dma_device_type": 1 00:08:21.380 }, 00:08:21.380 { 00:08:21.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.380 "dma_device_type": 2 00:08:21.380 } 00:08:21.380 ], 00:08:21.380 "driver_specific": {} 00:08:21.380 }' 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:21.380 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:21.662 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:21.920 [2024-07-25 11:19:37.756353] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.920 [2024-07-25 11:19:37.756408] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.921 [2024-07-25 11:19:37.756499] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.179 11:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.437 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:22.437 "name": "Existed_Raid", 00:08:22.437 "uuid": "8222141e-fabf-4d32-9dbc-809c4dc56c16", 00:08:22.437 "strip_size_kb": 64, 00:08:22.437 "state": "offline", 00:08:22.437 "raid_level": "concat", 00:08:22.437 "superblock": true, 00:08:22.437 "num_base_bdevs": 2, 00:08:22.437 "num_base_bdevs_discovered": 1, 00:08:22.437 "num_base_bdevs_operational": 1, 00:08:22.437 "base_bdevs_list": [ 00:08:22.437 { 00:08:22.437 "name": null, 00:08:22.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.437 "is_configured": false, 00:08:22.437 "data_offset": 2048, 00:08:22.437 "data_size": 63488 00:08:22.437 }, 00:08:22.437 { 00:08:22.437 "name": "BaseBdev2", 00:08:22.437 "uuid": "74155c05-663c-43a8-bb31-4114525ef240", 00:08:22.437 "is_configured": true, 00:08:22.437 "data_offset": 2048, 00:08:22.437 "data_size": 63488 00:08:22.437 } 00:08:22.437 ] 00:08:22.437 }' 00:08:22.437 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:22.437 11:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.003 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:23.003 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:23.003 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.003 11:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:23.262 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:23.262 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.262 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:23.521 [2024-07-25 11:19:39.304118] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.521 [2024-07-25 11:19:39.304211] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:23.780 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:23.780 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:23.780 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.780 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 64478 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64478 ']' 00:08:24.038 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64478 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64478 00:08:24.039 killing process with pid 64478 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64478' 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64478 00:08:24.039 11:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64478 00:08:24.039 [2024-07-25 11:19:39.752232] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.039 [2024-07-25 11:19:39.767535] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.414 11:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:25.414 00:08:25.414 real 0m12.509s 00:08:25.414 user 0m21.777s 00:08:25.414 sys 0m1.603s 00:08:25.414 11:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.414 ************************************ 00:08:25.414 END TEST raid_state_function_test_sb 00:08:25.414 ************************************ 00:08:25.414 11:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.414 11:19:40 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:25.414 11:19:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:25.414 11:19:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.414 11:19:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.414 ************************************ 00:08:25.414 START TEST raid_superblock_test 00:08:25.414 ************************************ 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:08:25.414 11:19:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:08:25.414 11:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=64850 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 64850 /var/tmp/spdk-raid.sock 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64850 ']' 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.414 11:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.414 [2024-07-25 11:19:41.102834] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
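raid_superblock_test builds its array from passthru bdevs (pt1, pt2) layered over malloc bdevs, each passthru created with a fixed UUID that later appears in the raid's base_bdevs_list. The concrete RPC calls show up further down in this trace; condensed, and assuming the freshly started bdev_svc on /var/tmp/spdk-raid.sock, they amount to roughly the following sketch:
# one malloc + passthru pair per base device, with a fixed UUID on each passthru bdev
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# assemble raid_bdev1 as a concat array with a 64 KiB strip size and a superblock (-s)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s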
00:08:25.414 [2024-07-25 11:19:41.103034] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64850 ] 00:08:25.414 [2024-07-25 11:19:41.279962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.981 [2024-07-25 11:19:41.558739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.981 [2024-07-25 11:19:41.762854] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.981 [2024-07-25 11:19:41.762907] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.240 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:26.807 malloc1 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.807 [2024-07-25 11:19:42.661093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.807 [2024-07-25 11:19:42.661189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.807 [2024-07-25 11:19:42.661219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.807 [2024-07-25 11:19:42.661238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.807 [2024-07-25 11:19:42.664047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.807 [2024-07-25 11:19:42.664102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.807 pt1 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.807 11:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:27.374 malloc2 00:08:27.374 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:27.374 [2024-07-25 11:19:43.222283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:27.374 [2024-07-25 11:19:43.222392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.374 [2024-07-25 11:19:43.222421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:27.374 [2024-07-25 11:19:43.222443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.374 [2024-07-25 11:19:43.225192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.374 [2024-07-25 11:19:43.225245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:27.374 pt2 00:08:27.374 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:08:27.374 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:08:27.374 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:27.632 [2024-07-25 11:19:43.454440] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.632 [2024-07-25 11:19:43.456898] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:27.632 [2024-07-25 11:19:43.457142] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.632 [2024-07-25 11:19:43.457167] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:27.632 [2024-07-25 11:19:43.457529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.632 [2024-07-25 11:19:43.457782] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.632 [2024-07-25 11:19:43.457816] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:27.632 [2024-07-25 11:19:43.458034] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:27.632 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.633 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.891 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.891 "name": "raid_bdev1", 00:08:27.891 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:27.891 "strip_size_kb": 64, 00:08:27.891 "state": "online", 00:08:27.891 "raid_level": "concat", 00:08:27.891 "superblock": true, 00:08:27.891 "num_base_bdevs": 2, 00:08:27.891 "num_base_bdevs_discovered": 2, 00:08:27.891 "num_base_bdevs_operational": 2, 00:08:27.891 "base_bdevs_list": [ 00:08:27.891 { 00:08:27.891 "name": "pt1", 00:08:27.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.891 "is_configured": true, 00:08:27.891 "data_offset": 2048, 00:08:27.891 "data_size": 63488 00:08:27.891 }, 00:08:27.891 { 00:08:27.891 "name": "pt2", 00:08:27.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.891 "is_configured": true, 00:08:27.891 "data_offset": 2048, 00:08:27.891 "data_size": 63488 00:08:27.891 } 00:08:27.891 ] 00:08:27.891 }' 00:08:27.891 11:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.891 11:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:28.826 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:28.827 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:28.827 [2024-07-25 11:19:44.651005] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.827 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:28.827 "name": "raid_bdev1", 00:08:28.827 "aliases": [ 00:08:28.827 "669c4830-1013-4d59-99b0-04ea9c16baf7" 00:08:28.827 ], 00:08:28.827 "product_name": "Raid Volume", 00:08:28.827 "block_size": 512, 00:08:28.827 "num_blocks": 126976, 00:08:28.827 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:28.827 "assigned_rate_limits": { 00:08:28.827 
"rw_ios_per_sec": 0, 00:08:28.827 "rw_mbytes_per_sec": 0, 00:08:28.827 "r_mbytes_per_sec": 0, 00:08:28.827 "w_mbytes_per_sec": 0 00:08:28.827 }, 00:08:28.827 "claimed": false, 00:08:28.827 "zoned": false, 00:08:28.827 "supported_io_types": { 00:08:28.827 "read": true, 00:08:28.827 "write": true, 00:08:28.827 "unmap": true, 00:08:28.827 "flush": true, 00:08:28.827 "reset": true, 00:08:28.827 "nvme_admin": false, 00:08:28.827 "nvme_io": false, 00:08:28.827 "nvme_io_md": false, 00:08:28.827 "write_zeroes": true, 00:08:28.827 "zcopy": false, 00:08:28.827 "get_zone_info": false, 00:08:28.827 "zone_management": false, 00:08:28.827 "zone_append": false, 00:08:28.827 "compare": false, 00:08:28.827 "compare_and_write": false, 00:08:28.827 "abort": false, 00:08:28.827 "seek_hole": false, 00:08:28.827 "seek_data": false, 00:08:28.827 "copy": false, 00:08:28.827 "nvme_iov_md": false 00:08:28.827 }, 00:08:28.827 "memory_domains": [ 00:08:28.827 { 00:08:28.827 "dma_device_id": "system", 00:08:28.827 "dma_device_type": 1 00:08:28.827 }, 00:08:28.827 { 00:08:28.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.827 "dma_device_type": 2 00:08:28.827 }, 00:08:28.827 { 00:08:28.827 "dma_device_id": "system", 00:08:28.827 "dma_device_type": 1 00:08:28.827 }, 00:08:28.827 { 00:08:28.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.827 "dma_device_type": 2 00:08:28.827 } 00:08:28.827 ], 00:08:28.827 "driver_specific": { 00:08:28.827 "raid": { 00:08:28.827 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:28.827 "strip_size_kb": 64, 00:08:28.827 "state": "online", 00:08:28.827 "raid_level": "concat", 00:08:28.827 "superblock": true, 00:08:28.827 "num_base_bdevs": 2, 00:08:28.827 "num_base_bdevs_discovered": 2, 00:08:28.827 "num_base_bdevs_operational": 2, 00:08:28.827 "base_bdevs_list": [ 00:08:28.827 { 00:08:28.827 "name": "pt1", 00:08:28.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.827 "is_configured": true, 00:08:28.827 "data_offset": 2048, 00:08:28.827 "data_size": 63488 00:08:28.827 }, 00:08:28.827 { 00:08:28.827 "name": "pt2", 00:08:28.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.827 "is_configured": true, 00:08:28.827 "data_offset": 2048, 00:08:28.827 "data_size": 63488 00:08:28.827 } 00:08:28.827 ] 00:08:28.827 } 00:08:28.827 } 00:08:28.827 }' 00:08:28.827 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.085 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:29.085 pt2' 00:08:29.085 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:29.086 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:29.086 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:29.405 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:29.405 "name": "pt1", 00:08:29.405 "aliases": [ 00:08:29.405 "00000000-0000-0000-0000-000000000001" 00:08:29.405 ], 00:08:29.405 "product_name": "passthru", 00:08:29.405 "block_size": 512, 00:08:29.405 "num_blocks": 65536, 00:08:29.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.405 "assigned_rate_limits": { 00:08:29.405 "rw_ios_per_sec": 0, 00:08:29.405 "rw_mbytes_per_sec": 0, 00:08:29.405 "r_mbytes_per_sec": 0, 00:08:29.405 
"w_mbytes_per_sec": 0 00:08:29.405 }, 00:08:29.405 "claimed": true, 00:08:29.405 "claim_type": "exclusive_write", 00:08:29.405 "zoned": false, 00:08:29.405 "supported_io_types": { 00:08:29.405 "read": true, 00:08:29.405 "write": true, 00:08:29.405 "unmap": true, 00:08:29.405 "flush": true, 00:08:29.405 "reset": true, 00:08:29.405 "nvme_admin": false, 00:08:29.405 "nvme_io": false, 00:08:29.405 "nvme_io_md": false, 00:08:29.405 "write_zeroes": true, 00:08:29.405 "zcopy": true, 00:08:29.405 "get_zone_info": false, 00:08:29.405 "zone_management": false, 00:08:29.405 "zone_append": false, 00:08:29.405 "compare": false, 00:08:29.405 "compare_and_write": false, 00:08:29.405 "abort": true, 00:08:29.405 "seek_hole": false, 00:08:29.405 "seek_data": false, 00:08:29.405 "copy": true, 00:08:29.405 "nvme_iov_md": false 00:08:29.405 }, 00:08:29.405 "memory_domains": [ 00:08:29.405 { 00:08:29.405 "dma_device_id": "system", 00:08:29.405 "dma_device_type": 1 00:08:29.405 }, 00:08:29.405 { 00:08:29.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.405 "dma_device_type": 2 00:08:29.405 } 00:08:29.405 ], 00:08:29.405 "driver_specific": { 00:08:29.405 "passthru": { 00:08:29.405 "name": "pt1", 00:08:29.405 "base_bdev_name": "malloc1" 00:08:29.405 } 00:08:29.405 } 00:08:29.405 }' 00:08:29.405 11:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:29.405 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:29.678 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:29.936 "name": "pt2", 00:08:29.936 "aliases": [ 00:08:29.936 "00000000-0000-0000-0000-000000000002" 00:08:29.936 ], 00:08:29.936 "product_name": "passthru", 00:08:29.936 "block_size": 512, 00:08:29.936 "num_blocks": 65536, 00:08:29.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.936 "assigned_rate_limits": { 00:08:29.936 "rw_ios_per_sec": 0, 00:08:29.936 "rw_mbytes_per_sec": 0, 00:08:29.936 "r_mbytes_per_sec": 0, 00:08:29.936 "w_mbytes_per_sec": 0 00:08:29.936 }, 00:08:29.936 "claimed": true, 00:08:29.936 "claim_type": "exclusive_write", 00:08:29.936 "zoned": false, 
00:08:29.936 "supported_io_types": { 00:08:29.936 "read": true, 00:08:29.936 "write": true, 00:08:29.936 "unmap": true, 00:08:29.936 "flush": true, 00:08:29.936 "reset": true, 00:08:29.936 "nvme_admin": false, 00:08:29.936 "nvme_io": false, 00:08:29.936 "nvme_io_md": false, 00:08:29.936 "write_zeroes": true, 00:08:29.936 "zcopy": true, 00:08:29.936 "get_zone_info": false, 00:08:29.936 "zone_management": false, 00:08:29.936 "zone_append": false, 00:08:29.936 "compare": false, 00:08:29.936 "compare_and_write": false, 00:08:29.936 "abort": true, 00:08:29.936 "seek_hole": false, 00:08:29.936 "seek_data": false, 00:08:29.936 "copy": true, 00:08:29.936 "nvme_iov_md": false 00:08:29.936 }, 00:08:29.936 "memory_domains": [ 00:08:29.936 { 00:08:29.936 "dma_device_id": "system", 00:08:29.936 "dma_device_type": 1 00:08:29.936 }, 00:08:29.936 { 00:08:29.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.936 "dma_device_type": 2 00:08:29.936 } 00:08:29.936 ], 00:08:29.936 "driver_specific": { 00:08:29.936 "passthru": { 00:08:29.936 "name": "pt2", 00:08:29.936 "base_bdev_name": "malloc2" 00:08:29.936 } 00:08:29.936 } 00:08:29.936 }' 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:29.936 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.195 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:30.195 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:30.195 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.195 11:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:30.195 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:30.195 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:30.195 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:08:30.453 [2024-07-25 11:19:46.219434] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.453 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=669c4830-1013-4d59-99b0-04ea9c16baf7 00:08:30.453 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 669c4830-1013-4d59-99b0-04ea9c16baf7 ']' 00:08:30.453 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:30.712 [2024-07-25 11:19:46.455107] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.712 [2024-07-25 11:19:46.455156] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.712 [2024-07-25 11:19:46.455264] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:08:30.712 [2024-07-25 11:19:46.455339] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.712 [2024-07-25 11:19:46.455354] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:30.712 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:08:30.712 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.970 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:08:30.971 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:08:30.971 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.971 11:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:31.229 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:08:31.229 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:31.487 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:31.488 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.746 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.747 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:31.747 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:32.005 [2024-07-25 11:19:47.803452] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:32.005 [2024-07-25 11:19:47.805838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:32.005 [2024-07-25 11:19:47.805941] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:32.005 [2024-07-25 11:19:47.806017] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:32.005 [2024-07-25 11:19:47.806046] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.005 [2024-07-25 11:19:47.806059] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:32.005 request: 00:08:32.005 { 00:08:32.005 "name": "raid_bdev1", 00:08:32.005 "raid_level": "concat", 00:08:32.005 "base_bdevs": [ 00:08:32.006 "malloc1", 00:08:32.006 "malloc2" 00:08:32.006 ], 00:08:32.006 "strip_size_kb": 64, 00:08:32.006 "superblock": false, 00:08:32.006 "method": "bdev_raid_create", 00:08:32.006 "req_id": 1 00:08:32.006 } 00:08:32.006 Got JSON-RPC error response 00:08:32.006 response: 00:08:32.006 { 00:08:32.006 "code": -17, 00:08:32.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:32.006 } 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.006 11:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:08:32.264 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:08:32.264 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:08:32.264 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.523 [2024-07-25 11:19:48.343527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.523 [2024-07-25 11:19:48.343614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.523 [2024-07-25 11:19:48.343662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:32.523 [2024-07-25 11:19:48.343678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.523 [2024-07-25 11:19:48.346413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.523 [2024-07-25 11:19:48.346458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.523 [2024-07-25 11:19:48.346574] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:32.523 [2024-07-25 11:19:48.346657] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.523 pt1 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.523 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.091 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:33.091 "name": "raid_bdev1", 00:08:33.091 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:33.091 "strip_size_kb": 64, 00:08:33.091 "state": "configuring", 00:08:33.091 "raid_level": "concat", 00:08:33.091 "superblock": true, 00:08:33.091 "num_base_bdevs": 2, 00:08:33.091 "num_base_bdevs_discovered": 1, 00:08:33.091 "num_base_bdevs_operational": 2, 00:08:33.091 "base_bdevs_list": [ 00:08:33.091 { 00:08:33.091 "name": "pt1", 00:08:33.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.091 "is_configured": true, 00:08:33.091 "data_offset": 2048, 00:08:33.091 "data_size": 63488 00:08:33.091 }, 00:08:33.091 { 00:08:33.091 "name": null, 00:08:33.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.091 "is_configured": false, 00:08:33.091 "data_offset": 2048, 00:08:33.091 "data_size": 63488 00:08:33.091 } 00:08:33.091 ] 00:08:33.091 }' 00:08:33.091 11:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:33.091 11:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.658 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:08:33.658 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:08:33.658 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:33.658 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.916 [2024-07-25 11:19:49.563879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.916 [2024-07-25 11:19:49.564002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.916 [2024-07-25 11:19:49.564038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009080 00:08:33.916 [2024-07-25 11:19:49.564057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.916 [2024-07-25 11:19:49.564652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.916 [2024-07-25 11:19:49.564687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.916 [2024-07-25 11:19:49.564799] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.916 [2024-07-25 11:19:49.564839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.916 [2024-07-25 11:19:49.565003] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.916 [2024-07-25 11:19:49.565018] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:33.916 [2024-07-25 11:19:49.565312] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:33.916 [2024-07-25 11:19:49.565506] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.916 [2024-07-25 11:19:49.565529] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:33.916 [2024-07-25 11:19:49.565701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.916 pt2 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:33.916 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.175 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.175 "name": "raid_bdev1", 00:08:34.175 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:34.175 "strip_size_kb": 64, 00:08:34.175 "state": "online", 00:08:34.175 "raid_level": "concat", 00:08:34.175 "superblock": true, 00:08:34.175 "num_base_bdevs": 2, 00:08:34.175 "num_base_bdevs_discovered": 2, 00:08:34.175 "num_base_bdevs_operational": 2, 00:08:34.175 "base_bdevs_list": [ 00:08:34.175 { 
00:08:34.175 "name": "pt1", 00:08:34.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.175 "is_configured": true, 00:08:34.175 "data_offset": 2048, 00:08:34.175 "data_size": 63488 00:08:34.175 }, 00:08:34.175 { 00:08:34.175 "name": "pt2", 00:08:34.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.175 "is_configured": true, 00:08:34.175 "data_offset": 2048, 00:08:34.175 "data_size": 63488 00:08:34.175 } 00:08:34.175 ] 00:08:34.175 }' 00:08:34.175 11:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.175 11:19:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.741 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:34.742 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:35.001 [2024-07-25 11:19:50.700521] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.001 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:35.001 "name": "raid_bdev1", 00:08:35.001 "aliases": [ 00:08:35.001 "669c4830-1013-4d59-99b0-04ea9c16baf7" 00:08:35.001 ], 00:08:35.001 "product_name": "Raid Volume", 00:08:35.001 "block_size": 512, 00:08:35.001 "num_blocks": 126976, 00:08:35.001 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:35.001 "assigned_rate_limits": { 00:08:35.001 "rw_ios_per_sec": 0, 00:08:35.001 "rw_mbytes_per_sec": 0, 00:08:35.001 "r_mbytes_per_sec": 0, 00:08:35.001 "w_mbytes_per_sec": 0 00:08:35.001 }, 00:08:35.001 "claimed": false, 00:08:35.001 "zoned": false, 00:08:35.001 "supported_io_types": { 00:08:35.001 "read": true, 00:08:35.001 "write": true, 00:08:35.001 "unmap": true, 00:08:35.001 "flush": true, 00:08:35.001 "reset": true, 00:08:35.001 "nvme_admin": false, 00:08:35.001 "nvme_io": false, 00:08:35.001 "nvme_io_md": false, 00:08:35.001 "write_zeroes": true, 00:08:35.001 "zcopy": false, 00:08:35.001 "get_zone_info": false, 00:08:35.001 "zone_management": false, 00:08:35.001 "zone_append": false, 00:08:35.001 "compare": false, 00:08:35.001 "compare_and_write": false, 00:08:35.001 "abort": false, 00:08:35.001 "seek_hole": false, 00:08:35.001 "seek_data": false, 00:08:35.001 "copy": false, 00:08:35.001 "nvme_iov_md": false 00:08:35.001 }, 00:08:35.001 "memory_domains": [ 00:08:35.001 { 00:08:35.001 "dma_device_id": "system", 00:08:35.001 "dma_device_type": 1 00:08:35.001 }, 00:08:35.001 { 00:08:35.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.001 "dma_device_type": 2 00:08:35.001 }, 00:08:35.001 { 00:08:35.001 "dma_device_id": "system", 00:08:35.001 "dma_device_type": 1 00:08:35.001 }, 00:08:35.001 { 00:08:35.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.001 "dma_device_type": 2 00:08:35.001 } 00:08:35.001 ], 
00:08:35.001 "driver_specific": { 00:08:35.001 "raid": { 00:08:35.001 "uuid": "669c4830-1013-4d59-99b0-04ea9c16baf7", 00:08:35.001 "strip_size_kb": 64, 00:08:35.001 "state": "online", 00:08:35.001 "raid_level": "concat", 00:08:35.001 "superblock": true, 00:08:35.001 "num_base_bdevs": 2, 00:08:35.001 "num_base_bdevs_discovered": 2, 00:08:35.001 "num_base_bdevs_operational": 2, 00:08:35.001 "base_bdevs_list": [ 00:08:35.001 { 00:08:35.001 "name": "pt1", 00:08:35.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.002 "is_configured": true, 00:08:35.002 "data_offset": 2048, 00:08:35.002 "data_size": 63488 00:08:35.002 }, 00:08:35.002 { 00:08:35.002 "name": "pt2", 00:08:35.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.002 "is_configured": true, 00:08:35.002 "data_offset": 2048, 00:08:35.002 "data_size": 63488 00:08:35.002 } 00:08:35.002 ] 00:08:35.002 } 00:08:35.002 } 00:08:35.002 }' 00:08:35.002 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.002 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:35.002 pt2' 00:08:35.002 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:35.002 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:35.002 11:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:35.261 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:35.261 "name": "pt1", 00:08:35.261 "aliases": [ 00:08:35.261 "00000000-0000-0000-0000-000000000001" 00:08:35.261 ], 00:08:35.261 "product_name": "passthru", 00:08:35.261 "block_size": 512, 00:08:35.261 "num_blocks": 65536, 00:08:35.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.261 "assigned_rate_limits": { 00:08:35.261 "rw_ios_per_sec": 0, 00:08:35.261 "rw_mbytes_per_sec": 0, 00:08:35.261 "r_mbytes_per_sec": 0, 00:08:35.261 "w_mbytes_per_sec": 0 00:08:35.261 }, 00:08:35.261 "claimed": true, 00:08:35.261 "claim_type": "exclusive_write", 00:08:35.261 "zoned": false, 00:08:35.261 "supported_io_types": { 00:08:35.261 "read": true, 00:08:35.261 "write": true, 00:08:35.261 "unmap": true, 00:08:35.261 "flush": true, 00:08:35.261 "reset": true, 00:08:35.261 "nvme_admin": false, 00:08:35.261 "nvme_io": false, 00:08:35.261 "nvme_io_md": false, 00:08:35.261 "write_zeroes": true, 00:08:35.261 "zcopy": true, 00:08:35.261 "get_zone_info": false, 00:08:35.261 "zone_management": false, 00:08:35.261 "zone_append": false, 00:08:35.261 "compare": false, 00:08:35.261 "compare_and_write": false, 00:08:35.261 "abort": true, 00:08:35.261 "seek_hole": false, 00:08:35.261 "seek_data": false, 00:08:35.261 "copy": true, 00:08:35.261 "nvme_iov_md": false 00:08:35.261 }, 00:08:35.261 "memory_domains": [ 00:08:35.261 { 00:08:35.261 "dma_device_id": "system", 00:08:35.261 "dma_device_type": 1 00:08:35.261 }, 00:08:35.261 { 00:08:35.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.261 "dma_device_type": 2 00:08:35.261 } 00:08:35.261 ], 00:08:35.261 "driver_specific": { 00:08:35.261 "passthru": { 00:08:35.261 "name": "pt1", 00:08:35.261 "base_bdev_name": "malloc1" 00:08:35.261 } 00:08:35.261 } 00:08:35.261 }' 00:08:35.261 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.261 11:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.261 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:35.261 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:35.520 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:35.778 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:35.778 "name": "pt2", 00:08:35.778 "aliases": [ 00:08:35.778 "00000000-0000-0000-0000-000000000002" 00:08:35.778 ], 00:08:35.778 "product_name": "passthru", 00:08:35.778 "block_size": 512, 00:08:35.778 "num_blocks": 65536, 00:08:35.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.778 "assigned_rate_limits": { 00:08:35.778 "rw_ios_per_sec": 0, 00:08:35.778 "rw_mbytes_per_sec": 0, 00:08:35.778 "r_mbytes_per_sec": 0, 00:08:35.778 "w_mbytes_per_sec": 0 00:08:35.778 }, 00:08:35.778 "claimed": true, 00:08:35.778 "claim_type": "exclusive_write", 00:08:35.778 "zoned": false, 00:08:35.778 "supported_io_types": { 00:08:35.778 "read": true, 00:08:35.778 "write": true, 00:08:35.778 "unmap": true, 00:08:35.778 "flush": true, 00:08:35.778 "reset": true, 00:08:35.778 "nvme_admin": false, 00:08:35.778 "nvme_io": false, 00:08:35.778 "nvme_io_md": false, 00:08:35.778 "write_zeroes": true, 00:08:35.778 "zcopy": true, 00:08:35.778 "get_zone_info": false, 00:08:35.778 "zone_management": false, 00:08:35.778 "zone_append": false, 00:08:35.778 "compare": false, 00:08:35.778 "compare_and_write": false, 00:08:35.778 "abort": true, 00:08:35.779 "seek_hole": false, 00:08:35.779 "seek_data": false, 00:08:35.779 "copy": true, 00:08:35.779 "nvme_iov_md": false 00:08:35.779 }, 00:08:35.779 "memory_domains": [ 00:08:35.779 { 00:08:35.779 "dma_device_id": "system", 00:08:35.779 "dma_device_type": 1 00:08:35.779 }, 00:08:35.779 { 00:08:35.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.779 "dma_device_type": 2 00:08:35.779 } 00:08:35.779 ], 00:08:35.779 "driver_specific": { 00:08:35.779 "passthru": { 00:08:35.779 "name": "pt2", 00:08:35.779 "base_bdev_name": "malloc2" 00:08:35.779 } 00:08:35.779 } 00:08:35.779 }' 00:08:35.779 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:36.038 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.296 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:36.296 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:36.296 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:36.296 11:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:08:36.555 [2024-07-25 11:19:52.201005] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 669c4830-1013-4d59-99b0-04ea9c16baf7 '!=' 669c4830-1013-4d59-99b0-04ea9c16baf7 ']' 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 64850 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64850 ']' 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64850 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64850 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.555 killing process with pid 64850 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64850' 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64850 00:08:36.555 [2024-07-25 11:19:52.252506] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.555 [2024-07-25 11:19:52.252646] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.555 [2024-07-25 11:19:52.252720] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.555 [2024-07-25 11:19:52.252735] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:36.555 11:19:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # 
wait 64850 00:08:36.555 [2024-07-25 11:19:52.435555] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.930 11:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:08:37.930 00:08:37.930 real 0m12.596s 00:08:37.930 user 0m21.979s 00:08:37.930 sys 0m1.631s 00:08:37.930 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.930 11:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.930 ************************************ 00:08:37.930 END TEST raid_superblock_test 00:08:37.930 ************************************ 00:08:37.930 11:19:53 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:37.930 11:19:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.930 11:19:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.930 11:19:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.930 ************************************ 00:08:37.930 START TEST raid_read_error_test 00:08:37.930 ************************************ 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p 
/raidtest 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.HRCE6GWKzx 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=65222 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 65222 /var/tmp/spdk-raid.sock 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65222 ']' 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.930 11:19:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.931 [2024-07-25 11:19:53.743481] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:08:37.931 [2024-07-25 11:19:53.743702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65222 ] 00:08:38.187 [2024-07-25 11:19:53.908425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.444 [2024-07-25 11:19:54.145476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.702 [2024-07-25 11:19:54.345790] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.702 [2024-07-25 11:19:54.345872] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.959 11:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.960 11:19:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:38.960 11:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:38.960 11:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.218 BaseBdev1_malloc 00:08:39.218 11:19:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:39.476 true 00:08:39.476 11:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.735 [2024-07-25 11:19:55.445785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.735 [2024-07-25 11:19:55.445880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.735 [2024-07-25 
11:19:55.445918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.735 [2024-07-25 11:19:55.445935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.735 [2024-07-25 11:19:55.448770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.735 [2024-07-25 11:19:55.448819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.735 BaseBdev1 00:08:39.735 11:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:39.735 11:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.993 BaseBdev2_malloc 00:08:39.993 11:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:40.251 true 00:08:40.251 11:19:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:40.509 [2024-07-25 11:19:56.194840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:40.509 [2024-07-25 11:19:56.194964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.509 [2024-07-25 11:19:56.195015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:40.509 [2024-07-25 11:19:56.195033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.509 [2024-07-25 11:19:56.198199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.509 [2024-07-25 11:19:56.198243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:40.509 BaseBdev2 00:08:40.509 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:40.767 [2024-07-25 11:19:56.423274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.767 [2024-07-25 11:19:56.426199] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.767 [2024-07-25 11:19:56.426520] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:40.767 [2024-07-25 11:19:56.426569] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:40.767 [2024-07-25 11:19:56.426973] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:40.767 [2024-07-25 11:19:56.427272] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:40.767 [2024-07-25 11:19:56.427300] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:40.767 [2024-07-25 11:19:56.427640] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:40.767 11:19:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.767 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.025 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.025 "name": "raid_bdev1", 00:08:41.025 "uuid": "ec45b0f7-4eb1-45ff-a44c-e3b66b047754", 00:08:41.025 "strip_size_kb": 64, 00:08:41.025 "state": "online", 00:08:41.025 "raid_level": "concat", 00:08:41.025 "superblock": true, 00:08:41.025 "num_base_bdevs": 2, 00:08:41.025 "num_base_bdevs_discovered": 2, 00:08:41.025 "num_base_bdevs_operational": 2, 00:08:41.025 "base_bdevs_list": [ 00:08:41.025 { 00:08:41.025 "name": "BaseBdev1", 00:08:41.026 "uuid": "254391c5-7144-5619-be47-248703f5922e", 00:08:41.026 "is_configured": true, 00:08:41.026 "data_offset": 2048, 00:08:41.026 "data_size": 63488 00:08:41.026 }, 00:08:41.026 { 00:08:41.026 "name": "BaseBdev2", 00:08:41.026 "uuid": "412c4701-fde6-5ded-a74b-84629c55d83d", 00:08:41.026 "is_configured": true, 00:08:41.026 "data_offset": 2048, 00:08:41.026 "data_size": 63488 00:08:41.026 } 00:08:41.026 ] 00:08:41.026 }' 00:08:41.026 11:19:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.026 11:19:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.596 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:41.596 11:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:41.596 [2024-07-25 11:19:57.441535] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:42.531 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:42.790 11:19:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.790 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.048 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:43.048 "name": "raid_bdev1", 00:08:43.048 "uuid": "ec45b0f7-4eb1-45ff-a44c-e3b66b047754", 00:08:43.048 "strip_size_kb": 64, 00:08:43.048 "state": "online", 00:08:43.048 "raid_level": "concat", 00:08:43.048 "superblock": true, 00:08:43.048 "num_base_bdevs": 2, 00:08:43.048 "num_base_bdevs_discovered": 2, 00:08:43.048 "num_base_bdevs_operational": 2, 00:08:43.048 "base_bdevs_list": [ 00:08:43.048 { 00:08:43.048 "name": "BaseBdev1", 00:08:43.048 "uuid": "254391c5-7144-5619-be47-248703f5922e", 00:08:43.048 "is_configured": true, 00:08:43.048 "data_offset": 2048, 00:08:43.048 "data_size": 63488 00:08:43.048 }, 00:08:43.048 { 00:08:43.048 "name": "BaseBdev2", 00:08:43.048 "uuid": "412c4701-fde6-5ded-a74b-84629c55d83d", 00:08:43.048 "is_configured": true, 00:08:43.048 "data_offset": 2048, 00:08:43.048 "data_size": 63488 00:08:43.048 } 00:08:43.048 ] 00:08:43.048 }' 00:08:43.048 11:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:43.048 11:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:43.982 [2024-07-25 11:19:59.834220] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.982 [2024-07-25 11:19:59.834269] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.982 [2024-07-25 11:19:59.837446] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.982 [2024-07-25 11:19:59.837504] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.982 [2024-07-25 11:19:59.837554] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.982 [2024-07-25 11:19:59.837569] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:43.982 0 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 65222 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65222 ']' 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@954 -- # kill -0 65222 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:43.982 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.240 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65222 00:08:44.240 killing process with pid 65222 00:08:44.240 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.240 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.240 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65222' 00:08:44.240 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65222 00:08:44.240 [2024-07-25 11:19:59.885102] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.241 11:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65222 00:08:44.241 [2024-07-25 11:20:00.006479] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.HRCE6GWKzx 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:08:45.615 00:08:45.615 real 0m7.589s 00:08:45.615 user 0m11.317s 00:08:45.615 sys 0m0.962s 00:08:45.615 ************************************ 00:08:45.615 END TEST raid_read_error_test 00:08:45.615 ************************************ 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.615 11:20:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.615 11:20:01 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:45.615 11:20:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:45.615 11:20:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.615 11:20:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.615 ************************************ 00:08:45.615 START TEST raid_write_error_test 00:08:45.616 ************************************ 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:08:45.616 11:20:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.iNbrEU2wJO 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=65404 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 65404 /var/tmp/spdk-raid.sock 00:08:45.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65404 ']' 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.616 11:20:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.616 [2024-07-25 11:20:01.393050] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
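The bdevperf app launched above (raid_pid=65404, a 60-second randrw job against raid_bdev1) is only the I/O generator; once its RPC socket at /var/tmp/spdk-raid.sock answers, the trace that follows assembles the array by stacking malloc, error and passthru bdevs under a concat raid. Below is a condensed sketch of that RPC sequence, reconstructed from the calls visible in this trace rather than taken from the bdev_raid.sh helpers themselves; the rpc.py path, socket and bdev names are assumed to match the trace, and the EE_ prefix is the name the error layer gives its wrapper bdev.

```bash
# Sketch only: the RPC calls this trace shows, not the autotest script itself.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

for bdev in BaseBdev1 BaseBdev2; do
    rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"          # 32 MiB backing disk, 512 B blocks
    rpc bdev_error_create "${bdev}_malloc"                     # error injector, registered as EE_<name>
    rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev" # passthru bdev the raid will claim
done

# concat array, 64 KiB strips; -s matches the "superblock": true in the info dumps
rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# start returning failures for writes that hit BaseBdev1's backing device
rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
```

Because concat carries no redundancy (has_redundancy returns 1 for it), the test does not expect the failed writes to be recovered; it only asserts that the failures-per-second figure bdevperf reports is not 0.00, which is 0.42 in this run.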
00:08:45.616 [2024-07-25 11:20:01.393217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65404 ] 00:08:45.873 [2024-07-25 11:20:01.557981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.130 [2024-07-25 11:20:01.884176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.387 [2024-07-25 11:20:02.092844] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.387 [2024-07-25 11:20:02.092887] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.645 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.645 11:20:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:46.645 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:46.645 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:46.902 BaseBdev1_malloc 00:08:46.902 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:47.160 true 00:08:47.160 11:20:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.417 [2024-07-25 11:20:03.122760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.417 [2024-07-25 11:20:03.122851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.417 [2024-07-25 11:20:03.122889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:47.417 [2024-07-25 11:20:03.122905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.417 [2024-07-25 11:20:03.125715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.417 [2024-07-25 11:20:03.125761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.417 BaseBdev1 00:08:47.417 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:08:47.417 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.675 BaseBdev2_malloc 00:08:47.675 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:47.933 true 00:08:47.933 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.191 [2024-07-25 11:20:03.942731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.191 [2024-07-25 11:20:03.942820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.191 [2024-07-25 11:20:03.942859] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.191 [2024-07-25 11:20:03.942876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.191 [2024-07-25 11:20:03.945643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.191 [2024-07-25 11:20:03.945686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.191 BaseBdev2 00:08:48.191 11:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:48.449 [2024-07-25 11:20:04.174889] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.449 [2024-07-25 11:20:04.177257] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.449 [2024-07-25 11:20:04.177548] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.449 [2024-07-25 11:20:04.177569] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:48.449 [2024-07-25 11:20:04.177931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:48.449 [2024-07-25 11:20:04.178168] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.449 [2024-07-25 11:20:04.178196] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:48.449 [2024-07-25 11:20:04.178412] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.449 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.708 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:48.708 "name": "raid_bdev1", 00:08:48.708 "uuid": "604d2cdc-ca3d-4f7e-a24d-07d2575051e6", 00:08:48.708 "strip_size_kb": 64, 00:08:48.708 "state": "online", 00:08:48.708 "raid_level": "concat", 00:08:48.708 "superblock": true, 00:08:48.708 "num_base_bdevs": 2, 00:08:48.708 
"num_base_bdevs_discovered": 2, 00:08:48.708 "num_base_bdevs_operational": 2, 00:08:48.708 "base_bdevs_list": [ 00:08:48.708 { 00:08:48.708 "name": "BaseBdev1", 00:08:48.708 "uuid": "852d0f2c-37b1-5d0c-8540-504c6eedf804", 00:08:48.708 "is_configured": true, 00:08:48.708 "data_offset": 2048, 00:08:48.708 "data_size": 63488 00:08:48.708 }, 00:08:48.708 { 00:08:48.708 "name": "BaseBdev2", 00:08:48.708 "uuid": "03a0c75f-b079-56f7-8ad1-c892069d9bdf", 00:08:48.708 "is_configured": true, 00:08:48.708 "data_offset": 2048, 00:08:48.708 "data_size": 63488 00:08:48.708 } 00:08:48.708 ] 00:08:48.708 }' 00:08:48.708 11:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:48.708 11:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:08:49.643 11:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:49.643 [2024-07-25 11:20:05.280508] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.578 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.144 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:51.144 "name": "raid_bdev1", 00:08:51.144 "uuid": "604d2cdc-ca3d-4f7e-a24d-07d2575051e6", 00:08:51.144 "strip_size_kb": 64, 00:08:51.144 "state": "online", 00:08:51.144 "raid_level": "concat", 00:08:51.144 "superblock": true, 00:08:51.144 "num_base_bdevs": 2, 00:08:51.144 
"num_base_bdevs_discovered": 2, 00:08:51.144 "num_base_bdevs_operational": 2, 00:08:51.144 "base_bdevs_list": [ 00:08:51.144 { 00:08:51.144 "name": "BaseBdev1", 00:08:51.144 "uuid": "852d0f2c-37b1-5d0c-8540-504c6eedf804", 00:08:51.144 "is_configured": true, 00:08:51.144 "data_offset": 2048, 00:08:51.144 "data_size": 63488 00:08:51.144 }, 00:08:51.144 { 00:08:51.144 "name": "BaseBdev2", 00:08:51.144 "uuid": "03a0c75f-b079-56f7-8ad1-c892069d9bdf", 00:08:51.144 "is_configured": true, 00:08:51.144 "data_offset": 2048, 00:08:51.144 "data_size": 63488 00:08:51.144 } 00:08:51.144 ] 00:08:51.144 }' 00:08:51.144 11:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:51.144 11:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.711 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:51.969 [2024-07-25 11:20:07.637492] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.969 [2024-07-25 11:20:07.637538] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.969 0 00:08:51.969 [2024-07-25 11:20:07.640676] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.969 [2024-07-25 11:20:07.640732] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.969 [2024-07-25 11:20:07.640780] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.969 [2024-07-25 11:20:07.640804] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 65404 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65404 ']' 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65404 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65404 00:08:51.969 killing process with pid 65404 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65404' 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65404 00:08:51.969 11:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65404 00:08:51.969 [2024-07-25 11:20:07.676296] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.969 [2024-07-25 11:20:07.797399] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.iNbrEU2wJO 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- 
# awk '{print $6}' 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:08:53.343 00:08:53.343 real 0m7.738s 00:08:53.343 user 0m11.738s 00:08:53.343 sys 0m0.908s 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.343 11:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.343 ************************************ 00:08:53.343 END TEST raid_write_error_test 00:08:53.343 ************************************ 00:08:53.343 11:20:09 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:08:53.343 11:20:09 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:53.343 11:20:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.343 11:20:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.343 11:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.343 ************************************ 00:08:53.343 START TEST raid_state_function_test 00:08:53.343 ************************************ 00:08:53.343 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:53.343 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:53.343 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=65591 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65591' 00:08:53.344 Process raid pid: 65591 00:08:53.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 65591 /var/tmp/spdk-raid.sock 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65591 ']' 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.344 11:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 [2024-07-25 11:20:09.151712] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
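Every state assertion in this raid_state_function_test run goes through verify_raid_bdev_state, which pulls the array's info over RPC and compares the state, raid level, strip size and base-bdev counters against what that step expects. The sketch below shows only that query path, assuming the same socket, rpc.py path and jq filter the trace uses, with field names taken from the info dumps that follow; the real helper lives in the traced bdev_raid.sh.

```bash
# Sketch of the query verify_raid_bdev_state performs, not the helper itself.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(jq -r .state <<< "$info")
discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")

# Before either base bdev exists, the raid1 array should sit in "configuring"
# with 0 of its 2 base bdevs discovered.
[[ "$state" == "configuring" && "$discovered" -eq 0 ]] || echo "unexpected: $state/$discovered" >&2
```

The run below shows exactly that progression: configuring with 0 base bdevs discovered, still configuring with 1 after BaseBdev1 is created, online with 2 once BaseBdev2 joins, and, because raid1 tolerates a missing mirror, still online with a single operational base bdev after BaseBdev1 is deleted.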
00:08:53.344 [2024-07-25 11:20:09.152089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.602 [2024-07-25 11:20:09.317116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.861 [2024-07-25 11:20:09.555178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.119 [2024-07-25 11:20:09.758668] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.119 [2024-07-25 11:20:09.758714] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.377 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.377 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.377 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:54.635 [2024-07-25 11:20:10.280137] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.635 [2024-07-25 11:20:10.280215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.635 [2024-07-25 11:20:10.280236] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.635 [2024-07-25 11:20:10.280250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.635 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.894 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:54.894 "name": "Existed_Raid", 00:08:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.894 "strip_size_kb": 0, 00:08:54.894 "state": "configuring", 00:08:54.894 "raid_level": "raid1", 00:08:54.894 "superblock": false, 00:08:54.894 "num_base_bdevs": 2, 
00:08:54.894 "num_base_bdevs_discovered": 0, 00:08:54.894 "num_base_bdevs_operational": 2, 00:08:54.894 "base_bdevs_list": [ 00:08:54.894 { 00:08:54.894 "name": "BaseBdev1", 00:08:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.894 "is_configured": false, 00:08:54.894 "data_offset": 0, 00:08:54.894 "data_size": 0 00:08:54.894 }, 00:08:54.894 { 00:08:54.894 "name": "BaseBdev2", 00:08:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.894 "is_configured": false, 00:08:54.894 "data_offset": 0, 00:08:54.894 "data_size": 0 00:08:54.894 } 00:08:54.894 ] 00:08:54.894 }' 00:08:54.894 11:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:54.894 11:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.460 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:55.719 [2024-07-25 11:20:11.468323] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.719 [2024-07-25 11:20:11.468377] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:55.719 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:55.977 [2024-07-25 11:20:11.752433] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.977 [2024-07-25 11:20:11.752507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.977 [2024-07-25 11:20:11.752530] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.977 [2024-07-25 11:20:11.752544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.977 11:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.235 [2024-07-25 11:20:12.041138] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.235 BaseBdev1 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:56.235 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:56.492 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.059 [ 00:08:57.059 { 00:08:57.059 "name": "BaseBdev1", 00:08:57.059 "aliases": [ 00:08:57.059 
"00edaf60-8074-43a3-8b07-cf62c4826bb4" 00:08:57.059 ], 00:08:57.059 "product_name": "Malloc disk", 00:08:57.059 "block_size": 512, 00:08:57.059 "num_blocks": 65536, 00:08:57.059 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:08:57.059 "assigned_rate_limits": { 00:08:57.059 "rw_ios_per_sec": 0, 00:08:57.059 "rw_mbytes_per_sec": 0, 00:08:57.059 "r_mbytes_per_sec": 0, 00:08:57.059 "w_mbytes_per_sec": 0 00:08:57.059 }, 00:08:57.059 "claimed": true, 00:08:57.059 "claim_type": "exclusive_write", 00:08:57.059 "zoned": false, 00:08:57.059 "supported_io_types": { 00:08:57.059 "read": true, 00:08:57.059 "write": true, 00:08:57.059 "unmap": true, 00:08:57.059 "flush": true, 00:08:57.059 "reset": true, 00:08:57.059 "nvme_admin": false, 00:08:57.059 "nvme_io": false, 00:08:57.059 "nvme_io_md": false, 00:08:57.059 "write_zeroes": true, 00:08:57.059 "zcopy": true, 00:08:57.059 "get_zone_info": false, 00:08:57.059 "zone_management": false, 00:08:57.059 "zone_append": false, 00:08:57.059 "compare": false, 00:08:57.059 "compare_and_write": false, 00:08:57.059 "abort": true, 00:08:57.059 "seek_hole": false, 00:08:57.059 "seek_data": false, 00:08:57.059 "copy": true, 00:08:57.059 "nvme_iov_md": false 00:08:57.059 }, 00:08:57.059 "memory_domains": [ 00:08:57.059 { 00:08:57.059 "dma_device_id": "system", 00:08:57.059 "dma_device_type": 1 00:08:57.059 }, 00:08:57.059 { 00:08:57.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.059 "dma_device_type": 2 00:08:57.059 } 00:08:57.059 ], 00:08:57.059 "driver_specific": {} 00:08:57.059 } 00:08:57.059 ] 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:57.059 "name": "Existed_Raid", 00:08:57.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.059 "strip_size_kb": 0, 00:08:57.059 "state": "configuring", 00:08:57.059 "raid_level": "raid1", 00:08:57.059 "superblock": false, 00:08:57.059 "num_base_bdevs": 2, 00:08:57.059 "num_base_bdevs_discovered": 1, 
00:08:57.059 "num_base_bdevs_operational": 2, 00:08:57.059 "base_bdevs_list": [ 00:08:57.059 { 00:08:57.059 "name": "BaseBdev1", 00:08:57.059 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:08:57.059 "is_configured": true, 00:08:57.059 "data_offset": 0, 00:08:57.059 "data_size": 65536 00:08:57.059 }, 00:08:57.059 { 00:08:57.059 "name": "BaseBdev2", 00:08:57.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.059 "is_configured": false, 00:08:57.059 "data_offset": 0, 00:08:57.059 "data_size": 0 00:08:57.059 } 00:08:57.059 ] 00:08:57.059 }' 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:57.059 11:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.687 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:57.945 [2024-07-25 11:20:13.757678] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.945 [2024-07-25 11:20:13.757754] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:57.945 11:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:58.513 [2024-07-25 11:20:14.109804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.513 [2024-07-25 11:20:14.112189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.513 [2024-07-25 11:20:14.112242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.513 11:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:58.513 "name": "Existed_Raid", 00:08:58.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.513 "strip_size_kb": 0, 00:08:58.513 "state": "configuring", 00:08:58.513 "raid_level": "raid1", 00:08:58.513 "superblock": false, 00:08:58.513 "num_base_bdevs": 2, 00:08:58.513 "num_base_bdevs_discovered": 1, 00:08:58.513 "num_base_bdevs_operational": 2, 00:08:58.513 "base_bdevs_list": [ 00:08:58.513 { 00:08:58.513 "name": "BaseBdev1", 00:08:58.513 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:08:58.513 "is_configured": true, 00:08:58.513 "data_offset": 0, 00:08:58.513 "data_size": 65536 00:08:58.513 }, 00:08:58.513 { 00:08:58.513 "name": "BaseBdev2", 00:08:58.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.513 "is_configured": false, 00:08:58.513 "data_offset": 0, 00:08:58.513 "data_size": 0 00:08:58.513 } 00:08:58.513 ] 00:08:58.513 }' 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:58.513 11:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.448 [2024-07-25 11:20:15.323500] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.448 [2024-07-25 11:20:15.323566] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.448 [2024-07-25 11:20:15.323584] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:59.448 [2024-07-25 11:20:15.323951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:59.448 [2024-07-25 11:20:15.324166] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.448 [2024-07-25 11:20:15.324183] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.448 [2024-07-25 11:20:15.324501] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.448 BaseBdev2 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.706 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:59.964 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.223 [ 00:09:00.223 { 00:09:00.223 "name": "BaseBdev2", 00:09:00.223 "aliases": [ 00:09:00.223 "cc9a31ed-b351-4c85-b525-80594631dcec" 00:09:00.223 ], 00:09:00.223 "product_name": "Malloc disk", 
00:09:00.223 "block_size": 512, 00:09:00.223 "num_blocks": 65536, 00:09:00.223 "uuid": "cc9a31ed-b351-4c85-b525-80594631dcec", 00:09:00.223 "assigned_rate_limits": { 00:09:00.223 "rw_ios_per_sec": 0, 00:09:00.223 "rw_mbytes_per_sec": 0, 00:09:00.223 "r_mbytes_per_sec": 0, 00:09:00.223 "w_mbytes_per_sec": 0 00:09:00.223 }, 00:09:00.223 "claimed": true, 00:09:00.223 "claim_type": "exclusive_write", 00:09:00.223 "zoned": false, 00:09:00.223 "supported_io_types": { 00:09:00.223 "read": true, 00:09:00.223 "write": true, 00:09:00.223 "unmap": true, 00:09:00.223 "flush": true, 00:09:00.223 "reset": true, 00:09:00.223 "nvme_admin": false, 00:09:00.223 "nvme_io": false, 00:09:00.223 "nvme_io_md": false, 00:09:00.223 "write_zeroes": true, 00:09:00.223 "zcopy": true, 00:09:00.223 "get_zone_info": false, 00:09:00.223 "zone_management": false, 00:09:00.223 "zone_append": false, 00:09:00.223 "compare": false, 00:09:00.223 "compare_and_write": false, 00:09:00.223 "abort": true, 00:09:00.223 "seek_hole": false, 00:09:00.223 "seek_data": false, 00:09:00.223 "copy": true, 00:09:00.223 "nvme_iov_md": false 00:09:00.223 }, 00:09:00.223 "memory_domains": [ 00:09:00.223 { 00:09:00.223 "dma_device_id": "system", 00:09:00.223 "dma_device_type": 1 00:09:00.223 }, 00:09:00.223 { 00:09:00.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.223 "dma_device_type": 2 00:09:00.223 } 00:09:00.223 ], 00:09:00.223 "driver_specific": {} 00:09:00.223 } 00:09:00.223 ] 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.223 11:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.482 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:00.482 "name": "Existed_Raid", 00:09:00.482 "uuid": "b030db89-717a-429c-bae9-634b32e9c523", 00:09:00.482 "strip_size_kb": 0, 00:09:00.482 "state": "online", 00:09:00.482 "raid_level": "raid1", 00:09:00.482 
"superblock": false, 00:09:00.482 "num_base_bdevs": 2, 00:09:00.482 "num_base_bdevs_discovered": 2, 00:09:00.482 "num_base_bdevs_operational": 2, 00:09:00.482 "base_bdevs_list": [ 00:09:00.482 { 00:09:00.482 "name": "BaseBdev1", 00:09:00.482 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:09:00.482 "is_configured": true, 00:09:00.482 "data_offset": 0, 00:09:00.482 "data_size": 65536 00:09:00.482 }, 00:09:00.482 { 00:09:00.482 "name": "BaseBdev2", 00:09:00.482 "uuid": "cc9a31ed-b351-4c85-b525-80594631dcec", 00:09:00.482 "is_configured": true, 00:09:00.482 "data_offset": 0, 00:09:00.482 "data_size": 65536 00:09:00.482 } 00:09:00.482 ] 00:09:00.482 }' 00:09:00.482 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:00.482 11:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:01.048 11:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:01.314 [2024-07-25 11:20:17.136438] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.314 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:01.314 "name": "Existed_Raid", 00:09:01.314 "aliases": [ 00:09:01.314 "b030db89-717a-429c-bae9-634b32e9c523" 00:09:01.315 ], 00:09:01.315 "product_name": "Raid Volume", 00:09:01.315 "block_size": 512, 00:09:01.315 "num_blocks": 65536, 00:09:01.315 "uuid": "b030db89-717a-429c-bae9-634b32e9c523", 00:09:01.315 "assigned_rate_limits": { 00:09:01.315 "rw_ios_per_sec": 0, 00:09:01.315 "rw_mbytes_per_sec": 0, 00:09:01.315 "r_mbytes_per_sec": 0, 00:09:01.315 "w_mbytes_per_sec": 0 00:09:01.315 }, 00:09:01.315 "claimed": false, 00:09:01.315 "zoned": false, 00:09:01.315 "supported_io_types": { 00:09:01.315 "read": true, 00:09:01.315 "write": true, 00:09:01.315 "unmap": false, 00:09:01.315 "flush": false, 00:09:01.315 "reset": true, 00:09:01.315 "nvme_admin": false, 00:09:01.315 "nvme_io": false, 00:09:01.315 "nvme_io_md": false, 00:09:01.315 "write_zeroes": true, 00:09:01.315 "zcopy": false, 00:09:01.315 "get_zone_info": false, 00:09:01.315 "zone_management": false, 00:09:01.315 "zone_append": false, 00:09:01.315 "compare": false, 00:09:01.315 "compare_and_write": false, 00:09:01.315 "abort": false, 00:09:01.315 "seek_hole": false, 00:09:01.315 "seek_data": false, 00:09:01.315 "copy": false, 00:09:01.315 "nvme_iov_md": false 00:09:01.315 }, 00:09:01.315 "memory_domains": [ 00:09:01.315 { 00:09:01.315 "dma_device_id": "system", 00:09:01.315 "dma_device_type": 1 00:09:01.315 }, 00:09:01.315 { 00:09:01.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.315 "dma_device_type": 2 
00:09:01.315 }, 00:09:01.315 { 00:09:01.315 "dma_device_id": "system", 00:09:01.315 "dma_device_type": 1 00:09:01.315 }, 00:09:01.315 { 00:09:01.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.315 "dma_device_type": 2 00:09:01.315 } 00:09:01.315 ], 00:09:01.315 "driver_specific": { 00:09:01.315 "raid": { 00:09:01.315 "uuid": "b030db89-717a-429c-bae9-634b32e9c523", 00:09:01.315 "strip_size_kb": 0, 00:09:01.315 "state": "online", 00:09:01.315 "raid_level": "raid1", 00:09:01.315 "superblock": false, 00:09:01.315 "num_base_bdevs": 2, 00:09:01.315 "num_base_bdevs_discovered": 2, 00:09:01.315 "num_base_bdevs_operational": 2, 00:09:01.315 "base_bdevs_list": [ 00:09:01.315 { 00:09:01.315 "name": "BaseBdev1", 00:09:01.315 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:09:01.315 "is_configured": true, 00:09:01.315 "data_offset": 0, 00:09:01.315 "data_size": 65536 00:09:01.315 }, 00:09:01.315 { 00:09:01.315 "name": "BaseBdev2", 00:09:01.315 "uuid": "cc9a31ed-b351-4c85-b525-80594631dcec", 00:09:01.315 "is_configured": true, 00:09:01.315 "data_offset": 0, 00:09:01.315 "data_size": 65536 00:09:01.315 } 00:09:01.315 ] 00:09:01.315 } 00:09:01.315 } 00:09:01.315 }' 00:09:01.315 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:01.618 BaseBdev2' 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:01.618 "name": "BaseBdev1", 00:09:01.618 "aliases": [ 00:09:01.618 "00edaf60-8074-43a3-8b07-cf62c4826bb4" 00:09:01.618 ], 00:09:01.618 "product_name": "Malloc disk", 00:09:01.618 "block_size": 512, 00:09:01.618 "num_blocks": 65536, 00:09:01.618 "uuid": "00edaf60-8074-43a3-8b07-cf62c4826bb4", 00:09:01.618 "assigned_rate_limits": { 00:09:01.618 "rw_ios_per_sec": 0, 00:09:01.618 "rw_mbytes_per_sec": 0, 00:09:01.618 "r_mbytes_per_sec": 0, 00:09:01.618 "w_mbytes_per_sec": 0 00:09:01.618 }, 00:09:01.618 "claimed": true, 00:09:01.618 "claim_type": "exclusive_write", 00:09:01.618 "zoned": false, 00:09:01.618 "supported_io_types": { 00:09:01.618 "read": true, 00:09:01.618 "write": true, 00:09:01.618 "unmap": true, 00:09:01.618 "flush": true, 00:09:01.618 "reset": true, 00:09:01.618 "nvme_admin": false, 00:09:01.618 "nvme_io": false, 00:09:01.618 "nvme_io_md": false, 00:09:01.618 "write_zeroes": true, 00:09:01.618 "zcopy": true, 00:09:01.618 "get_zone_info": false, 00:09:01.618 "zone_management": false, 00:09:01.618 "zone_append": false, 00:09:01.618 "compare": false, 00:09:01.618 "compare_and_write": false, 00:09:01.618 "abort": true, 00:09:01.618 "seek_hole": false, 00:09:01.618 "seek_data": false, 00:09:01.618 "copy": true, 00:09:01.618 "nvme_iov_md": false 00:09:01.618 }, 00:09:01.618 "memory_domains": [ 00:09:01.618 { 00:09:01.618 "dma_device_id": "system", 00:09:01.618 "dma_device_type": 1 00:09:01.618 }, 00:09:01.618 { 00:09:01.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.618 "dma_device_type": 2 00:09:01.618 } 00:09:01.618 
], 00:09:01.618 "driver_specific": {} 00:09:01.618 }' 00:09:01.618 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.876 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:01.876 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:01.876 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.876 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:01.876 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:01.877 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.877 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:01.877 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:01.877 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:02.136 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:02.136 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:02.136 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:02.136 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:02.136 11:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:02.394 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:02.394 "name": "BaseBdev2", 00:09:02.394 "aliases": [ 00:09:02.394 "cc9a31ed-b351-4c85-b525-80594631dcec" 00:09:02.394 ], 00:09:02.394 "product_name": "Malloc disk", 00:09:02.394 "block_size": 512, 00:09:02.394 "num_blocks": 65536, 00:09:02.394 "uuid": "cc9a31ed-b351-4c85-b525-80594631dcec", 00:09:02.394 "assigned_rate_limits": { 00:09:02.394 "rw_ios_per_sec": 0, 00:09:02.394 "rw_mbytes_per_sec": 0, 00:09:02.394 "r_mbytes_per_sec": 0, 00:09:02.394 "w_mbytes_per_sec": 0 00:09:02.394 }, 00:09:02.394 "claimed": true, 00:09:02.394 "claim_type": "exclusive_write", 00:09:02.394 "zoned": false, 00:09:02.394 "supported_io_types": { 00:09:02.394 "read": true, 00:09:02.394 "write": true, 00:09:02.394 "unmap": true, 00:09:02.394 "flush": true, 00:09:02.394 "reset": true, 00:09:02.394 "nvme_admin": false, 00:09:02.394 "nvme_io": false, 00:09:02.394 "nvme_io_md": false, 00:09:02.394 "write_zeroes": true, 00:09:02.394 "zcopy": true, 00:09:02.394 "get_zone_info": false, 00:09:02.394 "zone_management": false, 00:09:02.394 "zone_append": false, 00:09:02.394 "compare": false, 00:09:02.394 "compare_and_write": false, 00:09:02.394 "abort": true, 00:09:02.394 "seek_hole": false, 00:09:02.394 "seek_data": false, 00:09:02.394 "copy": true, 00:09:02.394 "nvme_iov_md": false 00:09:02.394 }, 00:09:02.394 "memory_domains": [ 00:09:02.394 { 00:09:02.394 "dma_device_id": "system", 00:09:02.394 "dma_device_type": 1 00:09:02.394 }, 00:09:02.394 { 00:09:02.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.394 "dma_device_type": 2 00:09:02.394 } 00:09:02.394 ], 00:09:02.394 "driver_specific": {} 00:09:02.394 }' 00:09:02.394 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:02.394 11:20:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:02.394 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:02.394 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:02.394 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:02.652 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:02.910 [2024-07-25 11:20:18.736588] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.168 11:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.424 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:03.424 "name": "Existed_Raid", 00:09:03.424 "uuid": 
"b030db89-717a-429c-bae9-634b32e9c523", 00:09:03.424 "strip_size_kb": 0, 00:09:03.424 "state": "online", 00:09:03.424 "raid_level": "raid1", 00:09:03.424 "superblock": false, 00:09:03.424 "num_base_bdevs": 2, 00:09:03.424 "num_base_bdevs_discovered": 1, 00:09:03.424 "num_base_bdevs_operational": 1, 00:09:03.424 "base_bdevs_list": [ 00:09:03.424 { 00:09:03.424 "name": null, 00:09:03.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.424 "is_configured": false, 00:09:03.424 "data_offset": 0, 00:09:03.424 "data_size": 65536 00:09:03.424 }, 00:09:03.424 { 00:09:03.424 "name": "BaseBdev2", 00:09:03.424 "uuid": "cc9a31ed-b351-4c85-b525-80594631dcec", 00:09:03.424 "is_configured": true, 00:09:03.424 "data_offset": 0, 00:09:03.424 "data_size": 65536 00:09:03.424 } 00:09:03.424 ] 00:09:03.424 }' 00:09:03.424 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:03.424 11:20:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.988 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:03.988 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:03.988 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.988 11:20:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:04.246 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:04.246 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:04.246 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:04.504 [2024-07-25 11:20:20.332294] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:04.504 [2024-07-25 11:20:20.332425] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.762 [2024-07-25 11:20:20.416866] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.762 [2024-07-25 11:20:20.416945] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.762 [2024-07-25 11:20:20.416961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:04.762 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:04.762 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:04.762 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.762 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 65591 00:09:05.022 11:20:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65591 ']' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65591 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65591 00:09:05.022 killing process with pid 65591 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65591' 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65591 00:09:05.022 [2024-07-25 11:20:20.691947] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.022 11:20:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65591 00:09:05.022 [2024-07-25 11:20:20.706519] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:06.403 00:09:06.403 real 0m12.815s 00:09:06.403 user 0m22.306s 00:09:06.403 sys 0m1.684s 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.403 ************************************ 00:09:06.403 END TEST raid_state_function_test 00:09:06.403 ************************************ 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.403 11:20:21 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:06.403 11:20:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:06.403 11:20:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.403 11:20:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.403 ************************************ 00:09:06.403 START TEST raid_state_function_test_sb 00:09:06.403 ************************************ 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:06.403 Process raid pid: 65964 00:09:06.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=65964 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65964' 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 65964 /var/tmp/spdk-raid.sock 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 65964 ']' 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.403 11:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.403 [2024-07-25 11:20:22.029368] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
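The raid_state_function_test_sb pass that follows repeats the raid1 state-machine checks with superblocks enabled: superblock_create_arg is set to -s, so bdev_raid_create writes an on-disk superblock and the base bdevs report data_offset 2048 and data_size 63488 instead of 0 and 65536. Condensed into a standalone sketch using only the RPCs that appear in this trace (the bdev_svc target started above, pid 65964, is assumed to be listening on /var/tmp/spdk-raid.sock; the rpc/sock shell variables are shorthand for readability, not part of the test script, and the ordering is simplified relative to the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Two 32 MiB malloc disks with 512-byte blocks (65536 blocks each) act as RAID members.
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2

# -s requests an on-disk superblock; raid1 takes no strip size.
$rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# The array should report state "online" with num_base_bdevs_discovered == 2.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

# Deleting one member exercises the redundancy path: a raid1 array stays online with 1 of 2 members.
$rpc -s $sock bdev_malloc_delete BaseBdev1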
00:09:06.403 [2024-07-25 11:20:22.029805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.403 [2024-07-25 11:20:22.206613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.661 [2024-07-25 11:20:22.460086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.918 [2024-07-25 11:20:22.665323] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.918 [2024-07-25 11:20:22.665660] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.175 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.175 11:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:07.175 11:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:07.433 [2024-07-25 11:20:23.226033] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.433 [2024-07-25 11:20:23.226113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.433 [2024-07-25 11:20:23.226133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.433 [2024-07-25 11:20:23.226147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.433 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.689 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.689 "name": "Existed_Raid", 00:09:07.689 "uuid": "bd62516a-c327-48e1-892a-c6f1c403b99a", 00:09:07.689 "strip_size_kb": 0, 00:09:07.689 "state": "configuring", 00:09:07.689 "raid_level": "raid1", 00:09:07.689 "superblock": 
true, 00:09:07.689 "num_base_bdevs": 2, 00:09:07.689 "num_base_bdevs_discovered": 0, 00:09:07.689 "num_base_bdevs_operational": 2, 00:09:07.689 "base_bdevs_list": [ 00:09:07.689 { 00:09:07.689 "name": "BaseBdev1", 00:09:07.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.689 "is_configured": false, 00:09:07.689 "data_offset": 0, 00:09:07.689 "data_size": 0 00:09:07.689 }, 00:09:07.689 { 00:09:07.689 "name": "BaseBdev2", 00:09:07.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.689 "is_configured": false, 00:09:07.689 "data_offset": 0, 00:09:07.689 "data_size": 0 00:09:07.689 } 00:09:07.689 ] 00:09:07.689 }' 00:09:07.689 11:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.689 11:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:08.621 [2024-07-25 11:20:24.402156] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.621 [2024-07-25 11:20:24.402201] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.621 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:08.879 [2024-07-25 11:20:24.678235] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.879 [2024-07-25 11:20:24.678296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.879 [2024-07-25 11:20:24.678316] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.879 [2024-07-25 11:20:24.678329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.879 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.137 [2024-07-25 11:20:24.941974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.137 BaseBdev1 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.137 11:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:09.395 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.653 [ 00:09:09.653 { 00:09:09.653 "name": 
"BaseBdev1", 00:09:09.653 "aliases": [ 00:09:09.653 "0c6bd1b3-4ac6-459c-a148-5e459e893219" 00:09:09.653 ], 00:09:09.653 "product_name": "Malloc disk", 00:09:09.653 "block_size": 512, 00:09:09.653 "num_blocks": 65536, 00:09:09.653 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:09.653 "assigned_rate_limits": { 00:09:09.653 "rw_ios_per_sec": 0, 00:09:09.653 "rw_mbytes_per_sec": 0, 00:09:09.653 "r_mbytes_per_sec": 0, 00:09:09.653 "w_mbytes_per_sec": 0 00:09:09.653 }, 00:09:09.653 "claimed": true, 00:09:09.653 "claim_type": "exclusive_write", 00:09:09.653 "zoned": false, 00:09:09.653 "supported_io_types": { 00:09:09.653 "read": true, 00:09:09.653 "write": true, 00:09:09.653 "unmap": true, 00:09:09.653 "flush": true, 00:09:09.653 "reset": true, 00:09:09.653 "nvme_admin": false, 00:09:09.653 "nvme_io": false, 00:09:09.653 "nvme_io_md": false, 00:09:09.653 "write_zeroes": true, 00:09:09.653 "zcopy": true, 00:09:09.653 "get_zone_info": false, 00:09:09.653 "zone_management": false, 00:09:09.653 "zone_append": false, 00:09:09.653 "compare": false, 00:09:09.653 "compare_and_write": false, 00:09:09.653 "abort": true, 00:09:09.653 "seek_hole": false, 00:09:09.653 "seek_data": false, 00:09:09.653 "copy": true, 00:09:09.653 "nvme_iov_md": false 00:09:09.653 }, 00:09:09.653 "memory_domains": [ 00:09:09.653 { 00:09:09.654 "dma_device_id": "system", 00:09:09.654 "dma_device_type": 1 00:09:09.654 }, 00:09:09.654 { 00:09:09.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.654 "dma_device_type": 2 00:09:09.654 } 00:09:09.654 ], 00:09:09.654 "driver_specific": {} 00:09:09.654 } 00:09:09.654 ] 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.654 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.220 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.220 "name": "Existed_Raid", 00:09:10.220 "uuid": "0eef4f15-5138-4013-a3aa-7b9040dfcfb5", 00:09:10.220 "strip_size_kb": 0, 00:09:10.220 "state": "configuring", 00:09:10.220 "raid_level": "raid1", 00:09:10.220 
"superblock": true, 00:09:10.220 "num_base_bdevs": 2, 00:09:10.220 "num_base_bdevs_discovered": 1, 00:09:10.220 "num_base_bdevs_operational": 2, 00:09:10.220 "base_bdevs_list": [ 00:09:10.220 { 00:09:10.220 "name": "BaseBdev1", 00:09:10.220 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:10.220 "is_configured": true, 00:09:10.220 "data_offset": 2048, 00:09:10.220 "data_size": 63488 00:09:10.220 }, 00:09:10.220 { 00:09:10.220 "name": "BaseBdev2", 00:09:10.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.220 "is_configured": false, 00:09:10.220 "data_offset": 0, 00:09:10.220 "data_size": 0 00:09:10.220 } 00:09:10.220 ] 00:09:10.220 }' 00:09:10.220 11:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.220 11:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.787 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:11.046 [2024-07-25 11:20:26.746528] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.046 [2024-07-25 11:20:26.746618] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:11.046 11:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:11.305 [2024-07-25 11:20:27.022677] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.305 [2024-07-25 11:20:27.024979] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.305 [2024-07-25 11:20:27.025032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.305 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.305 
11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.564 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.564 "name": "Existed_Raid", 00:09:11.564 "uuid": "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc", 00:09:11.564 "strip_size_kb": 0, 00:09:11.564 "state": "configuring", 00:09:11.564 "raid_level": "raid1", 00:09:11.564 "superblock": true, 00:09:11.564 "num_base_bdevs": 2, 00:09:11.564 "num_base_bdevs_discovered": 1, 00:09:11.564 "num_base_bdevs_operational": 2, 00:09:11.564 "base_bdevs_list": [ 00:09:11.564 { 00:09:11.564 "name": "BaseBdev1", 00:09:11.564 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:11.564 "is_configured": true, 00:09:11.564 "data_offset": 2048, 00:09:11.564 "data_size": 63488 00:09:11.564 }, 00:09:11.564 { 00:09:11.564 "name": "BaseBdev2", 00:09:11.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.564 "is_configured": false, 00:09:11.564 "data_offset": 0, 00:09:11.564 "data_size": 0 00:09:11.564 } 00:09:11.564 ] 00:09:11.564 }' 00:09:11.564 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.564 11:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.130 11:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.388 [2024-07-25 11:20:28.156456] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.388 BaseBdev2 00:09:12.388 [2024-07-25 11:20:28.158682] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:12.388 [2024-07-25 11:20:28.158711] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.388 [2024-07-25 11:20:28.159034] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:12.388 [2024-07-25 11:20:28.159223] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:12.388 [2024-07-25 11:20:28.159239] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:12.388 [2024-07-25 11:20:28.159419] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.388 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:12.646 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.903 [ 00:09:12.903 { 
00:09:12.903 "name": "BaseBdev2", 00:09:12.903 "aliases": [ 00:09:12.903 "2a1bffe8-2da7-481b-862a-281ae37a1343" 00:09:12.903 ], 00:09:12.903 "product_name": "Malloc disk", 00:09:12.903 "block_size": 512, 00:09:12.903 "num_blocks": 65536, 00:09:12.903 "uuid": "2a1bffe8-2da7-481b-862a-281ae37a1343", 00:09:12.903 "assigned_rate_limits": { 00:09:12.903 "rw_ios_per_sec": 0, 00:09:12.903 "rw_mbytes_per_sec": 0, 00:09:12.903 "r_mbytes_per_sec": 0, 00:09:12.903 "w_mbytes_per_sec": 0 00:09:12.903 }, 00:09:12.903 "claimed": true, 00:09:12.903 "claim_type": "exclusive_write", 00:09:12.903 "zoned": false, 00:09:12.903 "supported_io_types": { 00:09:12.903 "read": true, 00:09:12.903 "write": true, 00:09:12.903 "unmap": true, 00:09:12.903 "flush": true, 00:09:12.903 "reset": true, 00:09:12.903 "nvme_admin": false, 00:09:12.903 "nvme_io": false, 00:09:12.903 "nvme_io_md": false, 00:09:12.903 "write_zeroes": true, 00:09:12.903 "zcopy": true, 00:09:12.903 "get_zone_info": false, 00:09:12.903 "zone_management": false, 00:09:12.903 "zone_append": false, 00:09:12.903 "compare": false, 00:09:12.903 "compare_and_write": false, 00:09:12.903 "abort": true, 00:09:12.903 "seek_hole": false, 00:09:12.903 "seek_data": false, 00:09:12.903 "copy": true, 00:09:12.903 "nvme_iov_md": false 00:09:12.903 }, 00:09:12.903 "memory_domains": [ 00:09:12.903 { 00:09:12.903 "dma_device_id": "system", 00:09:12.903 "dma_device_type": 1 00:09:12.903 }, 00:09:12.903 { 00:09:12.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.903 "dma_device_type": 2 00:09:12.903 } 00:09:12.903 ], 00:09:12.903 "driver_specific": {} 00:09:12.903 } 00:09:12.903 ] 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.903 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.161 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:09:13.161 "name": "Existed_Raid", 00:09:13.161 "uuid": "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc", 00:09:13.161 "strip_size_kb": 0, 00:09:13.161 "state": "online", 00:09:13.161 "raid_level": "raid1", 00:09:13.161 "superblock": true, 00:09:13.161 "num_base_bdevs": 2, 00:09:13.161 "num_base_bdevs_discovered": 2, 00:09:13.161 "num_base_bdevs_operational": 2, 00:09:13.161 "base_bdevs_list": [ 00:09:13.161 { 00:09:13.161 "name": "BaseBdev1", 00:09:13.161 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:13.161 "is_configured": true, 00:09:13.161 "data_offset": 2048, 00:09:13.161 "data_size": 63488 00:09:13.161 }, 00:09:13.161 { 00:09:13.161 "name": "BaseBdev2", 00:09:13.161 "uuid": "2a1bffe8-2da7-481b-862a-281ae37a1343", 00:09:13.161 "is_configured": true, 00:09:13.161 "data_offset": 2048, 00:09:13.161 "data_size": 63488 00:09:13.161 } 00:09:13.161 ] 00:09:13.161 }' 00:09:13.161 11:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.161 11:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.095 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.095 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:14.095 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:14.095 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:14.095 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:14.096 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:14.096 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:14.096 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:14.096 [2024-07-25 11:20:29.929373] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.096 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:14.096 "name": "Existed_Raid", 00:09:14.096 "aliases": [ 00:09:14.096 "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc" 00:09:14.096 ], 00:09:14.096 "product_name": "Raid Volume", 00:09:14.096 "block_size": 512, 00:09:14.096 "num_blocks": 63488, 00:09:14.096 "uuid": "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc", 00:09:14.096 "assigned_rate_limits": { 00:09:14.096 "rw_ios_per_sec": 0, 00:09:14.096 "rw_mbytes_per_sec": 0, 00:09:14.096 "r_mbytes_per_sec": 0, 00:09:14.096 "w_mbytes_per_sec": 0 00:09:14.096 }, 00:09:14.096 "claimed": false, 00:09:14.096 "zoned": false, 00:09:14.096 "supported_io_types": { 00:09:14.096 "read": true, 00:09:14.096 "write": true, 00:09:14.096 "unmap": false, 00:09:14.096 "flush": false, 00:09:14.096 "reset": true, 00:09:14.096 "nvme_admin": false, 00:09:14.096 "nvme_io": false, 00:09:14.096 "nvme_io_md": false, 00:09:14.096 "write_zeroes": true, 00:09:14.096 "zcopy": false, 00:09:14.096 "get_zone_info": false, 00:09:14.096 "zone_management": false, 00:09:14.096 "zone_append": false, 00:09:14.096 "compare": false, 00:09:14.096 "compare_and_write": false, 00:09:14.096 "abort": false, 00:09:14.096 "seek_hole": false, 00:09:14.096 "seek_data": false, 00:09:14.096 "copy": false, 00:09:14.096 "nvme_iov_md": false 
00:09:14.096 }, 00:09:14.096 "memory_domains": [ 00:09:14.096 { 00:09:14.096 "dma_device_id": "system", 00:09:14.096 "dma_device_type": 1 00:09:14.096 }, 00:09:14.096 { 00:09:14.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.096 "dma_device_type": 2 00:09:14.096 }, 00:09:14.096 { 00:09:14.096 "dma_device_id": "system", 00:09:14.096 "dma_device_type": 1 00:09:14.096 }, 00:09:14.096 { 00:09:14.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.096 "dma_device_type": 2 00:09:14.096 } 00:09:14.096 ], 00:09:14.096 "driver_specific": { 00:09:14.096 "raid": { 00:09:14.096 "uuid": "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc", 00:09:14.096 "strip_size_kb": 0, 00:09:14.096 "state": "online", 00:09:14.096 "raid_level": "raid1", 00:09:14.096 "superblock": true, 00:09:14.096 "num_base_bdevs": 2, 00:09:14.096 "num_base_bdevs_discovered": 2, 00:09:14.096 "num_base_bdevs_operational": 2, 00:09:14.096 "base_bdevs_list": [ 00:09:14.096 { 00:09:14.096 "name": "BaseBdev1", 00:09:14.096 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:14.096 "is_configured": true, 00:09:14.096 "data_offset": 2048, 00:09:14.096 "data_size": 63488 00:09:14.096 }, 00:09:14.096 { 00:09:14.096 "name": "BaseBdev2", 00:09:14.096 "uuid": "2a1bffe8-2da7-481b-862a-281ae37a1343", 00:09:14.096 "is_configured": true, 00:09:14.096 "data_offset": 2048, 00:09:14.096 "data_size": 63488 00:09:14.096 } 00:09:14.096 ] 00:09:14.096 } 00:09:14.096 } 00:09:14.096 }' 00:09:14.096 11:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.355 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:14.355 BaseBdev2' 00:09:14.355 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:14.355 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:14.355 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:14.613 "name": "BaseBdev1", 00:09:14.613 "aliases": [ 00:09:14.613 "0c6bd1b3-4ac6-459c-a148-5e459e893219" 00:09:14.613 ], 00:09:14.613 "product_name": "Malloc disk", 00:09:14.613 "block_size": 512, 00:09:14.613 "num_blocks": 65536, 00:09:14.613 "uuid": "0c6bd1b3-4ac6-459c-a148-5e459e893219", 00:09:14.613 "assigned_rate_limits": { 00:09:14.613 "rw_ios_per_sec": 0, 00:09:14.613 "rw_mbytes_per_sec": 0, 00:09:14.613 "r_mbytes_per_sec": 0, 00:09:14.613 "w_mbytes_per_sec": 0 00:09:14.613 }, 00:09:14.613 "claimed": true, 00:09:14.613 "claim_type": "exclusive_write", 00:09:14.613 "zoned": false, 00:09:14.613 "supported_io_types": { 00:09:14.613 "read": true, 00:09:14.613 "write": true, 00:09:14.613 "unmap": true, 00:09:14.613 "flush": true, 00:09:14.613 "reset": true, 00:09:14.613 "nvme_admin": false, 00:09:14.613 "nvme_io": false, 00:09:14.613 "nvme_io_md": false, 00:09:14.613 "write_zeroes": true, 00:09:14.613 "zcopy": true, 00:09:14.613 "get_zone_info": false, 00:09:14.613 "zone_management": false, 00:09:14.613 "zone_append": false, 00:09:14.613 "compare": false, 00:09:14.613 "compare_and_write": false, 00:09:14.613 "abort": true, 00:09:14.613 "seek_hole": false, 00:09:14.613 "seek_data": false, 00:09:14.613 "copy": true, 00:09:14.613 "nvme_iov_md": false 
00:09:14.613 }, 00:09:14.613 "memory_domains": [ 00:09:14.613 { 00:09:14.613 "dma_device_id": "system", 00:09:14.613 "dma_device_type": 1 00:09:14.613 }, 00:09:14.613 { 00:09:14.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.613 "dma_device_type": 2 00:09:14.613 } 00:09:14.613 ], 00:09:14.613 "driver_specific": {} 00:09:14.613 }' 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:14.613 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:14.872 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:15.131 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:15.131 "name": "BaseBdev2", 00:09:15.131 "aliases": [ 00:09:15.131 "2a1bffe8-2da7-481b-862a-281ae37a1343" 00:09:15.131 ], 00:09:15.131 "product_name": "Malloc disk", 00:09:15.131 "block_size": 512, 00:09:15.131 "num_blocks": 65536, 00:09:15.131 "uuid": "2a1bffe8-2da7-481b-862a-281ae37a1343", 00:09:15.131 "assigned_rate_limits": { 00:09:15.131 "rw_ios_per_sec": 0, 00:09:15.131 "rw_mbytes_per_sec": 0, 00:09:15.131 "r_mbytes_per_sec": 0, 00:09:15.131 "w_mbytes_per_sec": 0 00:09:15.131 }, 00:09:15.131 "claimed": true, 00:09:15.131 "claim_type": "exclusive_write", 00:09:15.131 "zoned": false, 00:09:15.131 "supported_io_types": { 00:09:15.131 "read": true, 00:09:15.131 "write": true, 00:09:15.131 "unmap": true, 00:09:15.131 "flush": true, 00:09:15.131 "reset": true, 00:09:15.131 "nvme_admin": false, 00:09:15.131 "nvme_io": false, 00:09:15.131 "nvme_io_md": false, 00:09:15.131 "write_zeroes": true, 00:09:15.131 "zcopy": true, 00:09:15.131 "get_zone_info": false, 00:09:15.131 "zone_management": false, 00:09:15.131 "zone_append": false, 00:09:15.131 "compare": false, 00:09:15.131 "compare_and_write": false, 00:09:15.131 "abort": true, 00:09:15.131 "seek_hole": false, 00:09:15.131 "seek_data": false, 00:09:15.131 "copy": true, 00:09:15.131 "nvme_iov_md": false 00:09:15.131 }, 00:09:15.131 "memory_domains": [ 00:09:15.131 { 00:09:15.131 "dma_device_id": "system", 00:09:15.131 "dma_device_type": 1 00:09:15.131 
}, 00:09:15.131 { 00:09:15.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.131 "dma_device_type": 2 00:09:15.131 } 00:09:15.131 ], 00:09:15.131 "driver_specific": {} 00:09:15.131 }' 00:09:15.131 11:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:15.390 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:15.390 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:15.390 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:15.390 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:15.390 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:15.391 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:15.391 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:15.649 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:15.649 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:15.649 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:15.649 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:15.649 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:15.906 [2024-07-25 11:20:31.653541] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.906 11:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.473 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:16.473 "name": "Existed_Raid", 00:09:16.473 "uuid": "751ce9fb-7fa1-4c1e-92ae-2470e9320dcc", 00:09:16.473 "strip_size_kb": 0, 00:09:16.473 "state": "online", 00:09:16.473 "raid_level": "raid1", 00:09:16.473 "superblock": true, 00:09:16.473 "num_base_bdevs": 2, 00:09:16.473 "num_base_bdevs_discovered": 1, 00:09:16.473 "num_base_bdevs_operational": 1, 00:09:16.473 "base_bdevs_list": [ 00:09:16.473 { 00:09:16.473 "name": null, 00:09:16.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.473 "is_configured": false, 00:09:16.473 "data_offset": 2048, 00:09:16.473 "data_size": 63488 00:09:16.473 }, 00:09:16.473 { 00:09:16.473 "name": "BaseBdev2", 00:09:16.473 "uuid": "2a1bffe8-2da7-481b-862a-281ae37a1343", 00:09:16.473 "is_configured": true, 00:09:16.473 "data_offset": 2048, 00:09:16.473 "data_size": 63488 00:09:16.473 } 00:09:16.473 ] 00:09:16.473 }' 00:09:16.473 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:16.473 11:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.040 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:17.040 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:17.040 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.040 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:17.299 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:17.299 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.299 11:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:17.557 [2024-07-25 11:20:33.197721] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.557 [2024-07-25 11:20:33.198070] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.557 [2024-07-25 11:20:33.279648] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.557 [2024-07-25 11:20:33.279710] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.557 [2024-07-25 11:20:33.279735] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.557 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:17.557 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:17.557 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.557 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | 
select(.)' 00:09:17.815 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:17.815 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:17.815 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:17.815 11:20:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 65964 00:09:17.815 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 65964 ']' 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 65964 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65964 00:09:17.816 killing process with pid 65964 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65964' 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 65964 00:09:17.816 [2024-07-25 11:20:33.602760] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:17.816 11:20:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 65964 00:09:17.816 [2024-07-25 11:20:33.617202] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.190 ************************************ 00:09:19.190 END TEST raid_state_function_test_sb 00:09:19.190 ************************************ 00:09:19.190 11:20:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:19.190 00:09:19.190 real 0m12.836s 00:09:19.190 user 0m22.362s 00:09:19.190 sys 0m1.652s 00:09:19.190 11:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.190 11:20:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.190 11:20:34 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:19.190 11:20:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:19.190 11:20:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.190 11:20:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.190 ************************************ 00:09:19.190 START TEST raid_superblock_test 00:09:19.191 ************************************ 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=66337 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 66337 /var/tmp/spdk-raid.sock 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66337 ']' 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:19.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.191 11:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.191 [2024-07-25 11:20:34.897300] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
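The raid_superblock_test trace that follows assembles its raid1 volume from passthru bdevs stacked on malloc disks, so claiming and superblock handling are exercised one layer above the malloc base. Condensed from the RPCs visible in the trace (same rpc.py path and /var/tmp/spdk-raid.sock socket as before, now served by pid 66337; the fixed UUIDs are the ones the test passes to bdev_passthru_create, and the final jq filter mirrors the select(.name == ...) pattern used earlier in this log rather than being taken verbatim from the truncated portion):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# One malloc disk plus a passthru bdev claiming it, per RAID member.
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 512 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# raid_bdev1 is built on the passthru layer, again with a superblock (-s).
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# The array should come up online with 2 of 2 base bdevs discovered.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'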
00:09:19.191 [2024-07-25 11:20:34.897443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66337 ] 00:09:19.191 [2024-07-25 11:20:35.059917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.448 [2024-07-25 11:20:35.290407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.706 [2024-07-25 11:20:35.490094] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.706 [2024-07-25 11:20:35.490171] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.271 11:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.272 11:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:20.272 malloc1 00:09:20.530 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.530 [2024-07-25 11:20:36.390490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.530 [2024-07-25 11:20:36.390582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.530 [2024-07-25 11:20:36.390615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:20.530 [2024-07-25 11:20:36.390656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.530 [2024-07-25 11:20:36.393393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.530 [2024-07-25 11:20:36.393444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.530 pt1 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.787 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:21.043 malloc2 00:09:21.043 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.300 [2024-07-25 11:20:36.961829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.300 [2024-07-25 11:20:36.961925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.300 [2024-07-25 11:20:36.961956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:21.300 [2024-07-25 11:20:36.961979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.300 [2024-07-25 11:20:36.964711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.300 [2024-07-25 11:20:36.964762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.300 pt2 00:09:21.300 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:09:21.300 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:09:21.300 11:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:21.558 [2024-07-25 11:20:37.181938] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:21.558 [2024-07-25 11:20:37.184314] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.558 [2024-07-25 11:20:37.184552] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:21.558 [2024-07-25 11:20:37.184577] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:21.558 [2024-07-25 11:20:37.184945] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:21.558 [2024-07-25 11:20:37.185175] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:21.558 [2024-07-25 11:20:37.185193] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:21.558 [2024-07-25 11:20:37.185398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=0 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.558 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.815 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:21.815 "name": "raid_bdev1", 00:09:21.815 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:21.815 "strip_size_kb": 0, 00:09:21.815 "state": "online", 00:09:21.815 "raid_level": "raid1", 00:09:21.815 "superblock": true, 00:09:21.815 "num_base_bdevs": 2, 00:09:21.815 "num_base_bdevs_discovered": 2, 00:09:21.815 "num_base_bdevs_operational": 2, 00:09:21.815 "base_bdevs_list": [ 00:09:21.815 { 00:09:21.815 "name": "pt1", 00:09:21.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.815 "is_configured": true, 00:09:21.815 "data_offset": 2048, 00:09:21.815 "data_size": 63488 00:09:21.815 }, 00:09:21.815 { 00:09:21.815 "name": "pt2", 00:09:21.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.815 "is_configured": true, 00:09:21.815 "data_offset": 2048, 00:09:21.815 "data_size": 63488 00:09:21.815 } 00:09:21.815 ] 00:09:21.815 }' 00:09:21.815 11:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:21.815 11:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.380 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.380 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:22.380 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:22.380 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:22.380 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:22.381 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:22.381 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:22.381 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:22.638 [2024-07-25 11:20:38.326535] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.638 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:22.638 "name": "raid_bdev1", 00:09:22.638 "aliases": [ 00:09:22.638 "76c26406-f669-494a-b2b3-2499069dd40b" 00:09:22.638 ], 00:09:22.638 "product_name": "Raid Volume", 00:09:22.638 "block_size": 512, 00:09:22.638 "num_blocks": 63488, 00:09:22.638 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:22.638 "assigned_rate_limits": { 00:09:22.638 "rw_ios_per_sec": 0, 00:09:22.638 
"rw_mbytes_per_sec": 0, 00:09:22.638 "r_mbytes_per_sec": 0, 00:09:22.638 "w_mbytes_per_sec": 0 00:09:22.638 }, 00:09:22.638 "claimed": false, 00:09:22.638 "zoned": false, 00:09:22.638 "supported_io_types": { 00:09:22.638 "read": true, 00:09:22.638 "write": true, 00:09:22.638 "unmap": false, 00:09:22.638 "flush": false, 00:09:22.638 "reset": true, 00:09:22.638 "nvme_admin": false, 00:09:22.638 "nvme_io": false, 00:09:22.638 "nvme_io_md": false, 00:09:22.638 "write_zeroes": true, 00:09:22.638 "zcopy": false, 00:09:22.638 "get_zone_info": false, 00:09:22.638 "zone_management": false, 00:09:22.638 "zone_append": false, 00:09:22.638 "compare": false, 00:09:22.638 "compare_and_write": false, 00:09:22.638 "abort": false, 00:09:22.638 "seek_hole": false, 00:09:22.638 "seek_data": false, 00:09:22.638 "copy": false, 00:09:22.638 "nvme_iov_md": false 00:09:22.638 }, 00:09:22.638 "memory_domains": [ 00:09:22.638 { 00:09:22.638 "dma_device_id": "system", 00:09:22.638 "dma_device_type": 1 00:09:22.638 }, 00:09:22.638 { 00:09:22.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.638 "dma_device_type": 2 00:09:22.638 }, 00:09:22.638 { 00:09:22.638 "dma_device_id": "system", 00:09:22.638 "dma_device_type": 1 00:09:22.638 }, 00:09:22.638 { 00:09:22.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.638 "dma_device_type": 2 00:09:22.638 } 00:09:22.638 ], 00:09:22.638 "driver_specific": { 00:09:22.638 "raid": { 00:09:22.638 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:22.638 "strip_size_kb": 0, 00:09:22.638 "state": "online", 00:09:22.638 "raid_level": "raid1", 00:09:22.638 "superblock": true, 00:09:22.638 "num_base_bdevs": 2, 00:09:22.638 "num_base_bdevs_discovered": 2, 00:09:22.638 "num_base_bdevs_operational": 2, 00:09:22.638 "base_bdevs_list": [ 00:09:22.638 { 00:09:22.638 "name": "pt1", 00:09:22.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.638 "is_configured": true, 00:09:22.638 "data_offset": 2048, 00:09:22.638 "data_size": 63488 00:09:22.638 }, 00:09:22.638 { 00:09:22.638 "name": "pt2", 00:09:22.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.638 "is_configured": true, 00:09:22.638 "data_offset": 2048, 00:09:22.638 "data_size": 63488 00:09:22.639 } 00:09:22.639 ] 00:09:22.639 } 00:09:22.639 } 00:09:22.639 }' 00:09:22.639 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.639 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:22.639 pt2' 00:09:22.639 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:22.639 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:22.639 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:22.897 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:22.897 "name": "pt1", 00:09:22.897 "aliases": [ 00:09:22.897 "00000000-0000-0000-0000-000000000001" 00:09:22.897 ], 00:09:22.897 "product_name": "passthru", 00:09:22.897 "block_size": 512, 00:09:22.897 "num_blocks": 65536, 00:09:22.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.897 "assigned_rate_limits": { 00:09:22.897 "rw_ios_per_sec": 0, 00:09:22.897 "rw_mbytes_per_sec": 0, 00:09:22.897 "r_mbytes_per_sec": 0, 00:09:22.897 "w_mbytes_per_sec": 0 00:09:22.897 }, 00:09:22.897 
"claimed": true, 00:09:22.897 "claim_type": "exclusive_write", 00:09:22.897 "zoned": false, 00:09:22.897 "supported_io_types": { 00:09:22.897 "read": true, 00:09:22.897 "write": true, 00:09:22.897 "unmap": true, 00:09:22.897 "flush": true, 00:09:22.897 "reset": true, 00:09:22.897 "nvme_admin": false, 00:09:22.897 "nvme_io": false, 00:09:22.897 "nvme_io_md": false, 00:09:22.897 "write_zeroes": true, 00:09:22.897 "zcopy": true, 00:09:22.897 "get_zone_info": false, 00:09:22.897 "zone_management": false, 00:09:22.897 "zone_append": false, 00:09:22.897 "compare": false, 00:09:22.897 "compare_and_write": false, 00:09:22.897 "abort": true, 00:09:22.897 "seek_hole": false, 00:09:22.897 "seek_data": false, 00:09:22.897 "copy": true, 00:09:22.897 "nvme_iov_md": false 00:09:22.897 }, 00:09:22.897 "memory_domains": [ 00:09:22.897 { 00:09:22.897 "dma_device_id": "system", 00:09:22.897 "dma_device_type": 1 00:09:22.897 }, 00:09:22.897 { 00:09:22.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.897 "dma_device_type": 2 00:09:22.897 } 00:09:22.897 ], 00:09:22.897 "driver_specific": { 00:09:22.897 "passthru": { 00:09:22.897 "name": "pt1", 00:09:22.897 "base_bdev_name": "malloc1" 00:09:22.897 } 00:09:22.897 } 00:09:22.897 }' 00:09:22.897 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:22.897 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:22.897 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:22.897 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.155 11:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.155 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:23.155 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:23.155 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:23.155 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:23.720 "name": "pt2", 00:09:23.720 "aliases": [ 00:09:23.720 "00000000-0000-0000-0000-000000000002" 00:09:23.720 ], 00:09:23.720 "product_name": "passthru", 00:09:23.720 "block_size": 512, 00:09:23.720 "num_blocks": 65536, 00:09:23.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.720 "assigned_rate_limits": { 00:09:23.720 "rw_ios_per_sec": 0, 00:09:23.720 "rw_mbytes_per_sec": 0, 00:09:23.720 "r_mbytes_per_sec": 0, 00:09:23.720 "w_mbytes_per_sec": 0 00:09:23.720 }, 00:09:23.720 "claimed": true, 00:09:23.720 "claim_type": "exclusive_write", 00:09:23.720 "zoned": false, 00:09:23.720 "supported_io_types": { 00:09:23.720 "read": 
true, 00:09:23.720 "write": true, 00:09:23.720 "unmap": true, 00:09:23.720 "flush": true, 00:09:23.720 "reset": true, 00:09:23.720 "nvme_admin": false, 00:09:23.720 "nvme_io": false, 00:09:23.720 "nvme_io_md": false, 00:09:23.720 "write_zeroes": true, 00:09:23.720 "zcopy": true, 00:09:23.720 "get_zone_info": false, 00:09:23.720 "zone_management": false, 00:09:23.720 "zone_append": false, 00:09:23.720 "compare": false, 00:09:23.720 "compare_and_write": false, 00:09:23.720 "abort": true, 00:09:23.720 "seek_hole": false, 00:09:23.720 "seek_data": false, 00:09:23.720 "copy": true, 00:09:23.720 "nvme_iov_md": false 00:09:23.720 }, 00:09:23.720 "memory_domains": [ 00:09:23.720 { 00:09:23.720 "dma_device_id": "system", 00:09:23.720 "dma_device_type": 1 00:09:23.720 }, 00:09:23.720 { 00:09:23.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.720 "dma_device_type": 2 00:09:23.720 } 00:09:23.720 ], 00:09:23.720 "driver_specific": { 00:09:23.720 "passthru": { 00:09:23.720 "name": "pt2", 00:09:23.720 "base_bdev_name": "malloc2" 00:09:23.720 } 00:09:23.720 } 00:09:23.720 }' 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.720 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:23.979 11:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:09:24.236 [2024-07-25 11:20:39.998974] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.236 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=76c26406-f669-494a-b2b3-2499069dd40b 00:09:24.236 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 76c26406-f669-494a-b2b3-2499069dd40b ']' 00:09:24.236 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:24.494 [2024-07-25 11:20:40.274710] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.494 [2024-07-25 11:20:40.274755] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.494 [2024-07-25 11:20:40.274856] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.494 [2024-07-25 11:20:40.274949] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.494 [2024-07-25 11:20:40.274965] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:24.494 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.494 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:09:24.751 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:09:24.751 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:09:24.751 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.751 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:25.009 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.009 11:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:25.266 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:25.266 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:25.524 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 
malloc2' -n raid_bdev1 00:09:25.783 [2024-07-25 11:20:41.618984] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.783 [2024-07-25 11:20:41.621313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:25.783 [2024-07-25 11:20:41.621411] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.783 [2024-07-25 11:20:41.621482] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.783 [2024-07-25 11:20:41.621512] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.783 [2024-07-25 11:20:41.621526] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:25.783 request: 00:09:25.783 { 00:09:25.783 "name": "raid_bdev1", 00:09:25.783 "raid_level": "raid1", 00:09:25.783 "base_bdevs": [ 00:09:25.783 "malloc1", 00:09:25.783 "malloc2" 00:09:25.783 ], 00:09:25.783 "superblock": false, 00:09:25.783 "method": "bdev_raid_create", 00:09:25.783 "req_id": 1 00:09:25.783 } 00:09:25.783 Got JSON-RPC error response 00:09:25.783 response: 00:09:25.783 { 00:09:25.783 "code": -17, 00:09:25.783 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.783 } 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:09:25.784 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.041 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:09:26.041 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:09:26.041 11:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.608 [2024-07-25 11:20:42.187053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.608 [2024-07-25 11:20:42.187142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.608 [2024-07-25 11:20:42.187174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:26.608 [2024-07-25 11:20:42.187189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.608 [2024-07-25 11:20:42.189937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.609 [2024-07-25 11:20:42.189998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.609 [2024-07-25 11:20:42.190132] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:26.609 [2024-07-25 11:20:42.190195] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.609 pt1 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:26.609 "name": "raid_bdev1", 00:09:26.609 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:26.609 "strip_size_kb": 0, 00:09:26.609 "state": "configuring", 00:09:26.609 "raid_level": "raid1", 00:09:26.609 "superblock": true, 00:09:26.609 "num_base_bdevs": 2, 00:09:26.609 "num_base_bdevs_discovered": 1, 00:09:26.609 "num_base_bdevs_operational": 2, 00:09:26.609 "base_bdevs_list": [ 00:09:26.609 { 00:09:26.609 "name": "pt1", 00:09:26.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.609 "is_configured": true, 00:09:26.609 "data_offset": 2048, 00:09:26.609 "data_size": 63488 00:09:26.609 }, 00:09:26.609 { 00:09:26.609 "name": null, 00:09:26.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.609 "is_configured": false, 00:09:26.609 "data_offset": 2048, 00:09:26.609 "data_size": 63488 00:09:26.609 } 00:09:26.609 ] 00:09:26.609 }' 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:26.609 11:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.174 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:09:27.174 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:09:27.174 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:09:27.174 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:27.432 [2024-07-25 11:20:43.283335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:27.432 [2024-07-25 11:20:43.283435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.432 [2024-07-25 11:20:43.283472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:27.432 [2024-07-25 11:20:43.283487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.432 
[2024-07-25 11:20:43.284117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.432 [2024-07-25 11:20:43.284158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:27.432 [2024-07-25 11:20:43.284264] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:27.432 [2024-07-25 11:20:43.284303] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.432 [2024-07-25 11:20:43.284483] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.432 [2024-07-25 11:20:43.284506] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:27.432 [2024-07-25 11:20:43.284818] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.432 [2024-07-25 11:20:43.285010] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.432 [2024-07-25 11:20:43.285031] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.432 [2024-07-25 11:20:43.285184] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.432 pt2 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.692 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.951 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.951 "name": "raid_bdev1", 00:09:27.951 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:27.951 "strip_size_kb": 0, 00:09:27.951 "state": "online", 00:09:27.951 "raid_level": "raid1", 00:09:27.951 "superblock": true, 00:09:27.951 "num_base_bdevs": 2, 00:09:27.951 "num_base_bdevs_discovered": 2, 00:09:27.951 "num_base_bdevs_operational": 2, 00:09:27.951 "base_bdevs_list": [ 00:09:27.951 { 00:09:27.951 "name": "pt1", 00:09:27.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.951 "is_configured": true, 00:09:27.951 "data_offset": 2048, 00:09:27.951 
"data_size": 63488 00:09:27.951 }, 00:09:27.951 { 00:09:27.951 "name": "pt2", 00:09:27.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.951 "is_configured": true, 00:09:27.951 "data_offset": 2048, 00:09:27.951 "data_size": 63488 00:09:27.951 } 00:09:27.951 ] 00:09:27.951 }' 00:09:27.951 11:20:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.951 11:20:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:28.516 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:28.775 [2024-07-25 11:20:44.533463] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:28.775 "name": "raid_bdev1", 00:09:28.775 "aliases": [ 00:09:28.775 "76c26406-f669-494a-b2b3-2499069dd40b" 00:09:28.775 ], 00:09:28.775 "product_name": "Raid Volume", 00:09:28.775 "block_size": 512, 00:09:28.775 "num_blocks": 63488, 00:09:28.775 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:28.775 "assigned_rate_limits": { 00:09:28.775 "rw_ios_per_sec": 0, 00:09:28.775 "rw_mbytes_per_sec": 0, 00:09:28.775 "r_mbytes_per_sec": 0, 00:09:28.775 "w_mbytes_per_sec": 0 00:09:28.775 }, 00:09:28.775 "claimed": false, 00:09:28.775 "zoned": false, 00:09:28.775 "supported_io_types": { 00:09:28.775 "read": true, 00:09:28.775 "write": true, 00:09:28.775 "unmap": false, 00:09:28.775 "flush": false, 00:09:28.775 "reset": true, 00:09:28.775 "nvme_admin": false, 00:09:28.775 "nvme_io": false, 00:09:28.775 "nvme_io_md": false, 00:09:28.775 "write_zeroes": true, 00:09:28.775 "zcopy": false, 00:09:28.775 "get_zone_info": false, 00:09:28.775 "zone_management": false, 00:09:28.775 "zone_append": false, 00:09:28.775 "compare": false, 00:09:28.775 "compare_and_write": false, 00:09:28.775 "abort": false, 00:09:28.775 "seek_hole": false, 00:09:28.775 "seek_data": false, 00:09:28.775 "copy": false, 00:09:28.775 "nvme_iov_md": false 00:09:28.775 }, 00:09:28.775 "memory_domains": [ 00:09:28.775 { 00:09:28.775 "dma_device_id": "system", 00:09:28.775 "dma_device_type": 1 00:09:28.775 }, 00:09:28.775 { 00:09:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.775 "dma_device_type": 2 00:09:28.775 }, 00:09:28.775 { 00:09:28.775 "dma_device_id": "system", 00:09:28.775 "dma_device_type": 1 00:09:28.775 }, 00:09:28.775 { 00:09:28.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.775 "dma_device_type": 2 00:09:28.775 } 00:09:28.775 ], 00:09:28.775 "driver_specific": { 00:09:28.775 "raid": { 00:09:28.775 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:28.775 "strip_size_kb": 0, 00:09:28.775 "state": 
"online", 00:09:28.775 "raid_level": "raid1", 00:09:28.775 "superblock": true, 00:09:28.775 "num_base_bdevs": 2, 00:09:28.775 "num_base_bdevs_discovered": 2, 00:09:28.775 "num_base_bdevs_operational": 2, 00:09:28.775 "base_bdevs_list": [ 00:09:28.775 { 00:09:28.775 "name": "pt1", 00:09:28.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.775 "is_configured": true, 00:09:28.775 "data_offset": 2048, 00:09:28.775 "data_size": 63488 00:09:28.775 }, 00:09:28.775 { 00:09:28.775 "name": "pt2", 00:09:28.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.775 "is_configured": true, 00:09:28.775 "data_offset": 2048, 00:09:28.775 "data_size": 63488 00:09:28.775 } 00:09:28.775 ] 00:09:28.775 } 00:09:28.775 } 00:09:28.775 }' 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:28.775 pt2' 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:28.775 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:29.090 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:29.090 "name": "pt1", 00:09:29.090 "aliases": [ 00:09:29.090 "00000000-0000-0000-0000-000000000001" 00:09:29.090 ], 00:09:29.090 "product_name": "passthru", 00:09:29.090 "block_size": 512, 00:09:29.090 "num_blocks": 65536, 00:09:29.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.090 "assigned_rate_limits": { 00:09:29.090 "rw_ios_per_sec": 0, 00:09:29.090 "rw_mbytes_per_sec": 0, 00:09:29.090 "r_mbytes_per_sec": 0, 00:09:29.090 "w_mbytes_per_sec": 0 00:09:29.090 }, 00:09:29.090 "claimed": true, 00:09:29.090 "claim_type": "exclusive_write", 00:09:29.090 "zoned": false, 00:09:29.090 "supported_io_types": { 00:09:29.090 "read": true, 00:09:29.090 "write": true, 00:09:29.090 "unmap": true, 00:09:29.090 "flush": true, 00:09:29.090 "reset": true, 00:09:29.090 "nvme_admin": false, 00:09:29.090 "nvme_io": false, 00:09:29.090 "nvme_io_md": false, 00:09:29.090 "write_zeroes": true, 00:09:29.090 "zcopy": true, 00:09:29.090 "get_zone_info": false, 00:09:29.090 "zone_management": false, 00:09:29.090 "zone_append": false, 00:09:29.090 "compare": false, 00:09:29.090 "compare_and_write": false, 00:09:29.090 "abort": true, 00:09:29.090 "seek_hole": false, 00:09:29.090 "seek_data": false, 00:09:29.090 "copy": true, 00:09:29.090 "nvme_iov_md": false 00:09:29.090 }, 00:09:29.090 "memory_domains": [ 00:09:29.090 { 00:09:29.090 "dma_device_id": "system", 00:09:29.090 "dma_device_type": 1 00:09:29.090 }, 00:09:29.090 { 00:09:29.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.090 "dma_device_type": 2 00:09:29.090 } 00:09:29.090 ], 00:09:29.090 "driver_specific": { 00:09:29.090 "passthru": { 00:09:29.090 "name": "pt1", 00:09:29.090 "base_bdev_name": "malloc1" 00:09:29.090 } 00:09:29.090 } 00:09:29.090 }' 00:09:29.090 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.090 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.090 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:09:29.090 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.350 11:20:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:29.350 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:29.918 "name": "pt2", 00:09:29.918 "aliases": [ 00:09:29.918 "00000000-0000-0000-0000-000000000002" 00:09:29.918 ], 00:09:29.918 "product_name": "passthru", 00:09:29.918 "block_size": 512, 00:09:29.918 "num_blocks": 65536, 00:09:29.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.918 "assigned_rate_limits": { 00:09:29.918 "rw_ios_per_sec": 0, 00:09:29.918 "rw_mbytes_per_sec": 0, 00:09:29.918 "r_mbytes_per_sec": 0, 00:09:29.918 "w_mbytes_per_sec": 0 00:09:29.918 }, 00:09:29.918 "claimed": true, 00:09:29.918 "claim_type": "exclusive_write", 00:09:29.918 "zoned": false, 00:09:29.918 "supported_io_types": { 00:09:29.918 "read": true, 00:09:29.918 "write": true, 00:09:29.918 "unmap": true, 00:09:29.918 "flush": true, 00:09:29.918 "reset": true, 00:09:29.918 "nvme_admin": false, 00:09:29.918 "nvme_io": false, 00:09:29.918 "nvme_io_md": false, 00:09:29.918 "write_zeroes": true, 00:09:29.918 "zcopy": true, 00:09:29.918 "get_zone_info": false, 00:09:29.918 "zone_management": false, 00:09:29.918 "zone_append": false, 00:09:29.918 "compare": false, 00:09:29.918 "compare_and_write": false, 00:09:29.918 "abort": true, 00:09:29.918 "seek_hole": false, 00:09:29.918 "seek_data": false, 00:09:29.918 "copy": true, 00:09:29.918 "nvme_iov_md": false 00:09:29.918 }, 00:09:29.918 "memory_domains": [ 00:09:29.918 { 00:09:29.918 "dma_device_id": "system", 00:09:29.918 "dma_device_type": 1 00:09:29.918 }, 00:09:29.918 { 00:09:29.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.918 "dma_device_type": 2 00:09:29.918 } 00:09:29.918 ], 00:09:29.918 "driver_specific": { 00:09:29.918 "passthru": { 00:09:29.918 "name": "pt2", 00:09:29.918 "base_bdev_name": "malloc2" 00:09:29.918 } 00:09:29.918 } 00:09:29.918 }' 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.918 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:30.177 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:30.177 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:30.177 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:30.177 11:20:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:09:30.437 [2024-07-25 11:20:46.121916] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.437 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 76c26406-f669-494a-b2b3-2499069dd40b '!=' 76c26406-f669-494a-b2b3-2499069dd40b ']' 00:09:30.437 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:09:30.437 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:30.437 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:30.437 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:30.696 [2024-07-25 11:20:46.405737] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.696 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.955 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:30.955 "name": "raid_bdev1", 00:09:30.955 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:30.955 "strip_size_kb": 0, 00:09:30.955 "state": "online", 
00:09:30.955 "raid_level": "raid1", 00:09:30.955 "superblock": true, 00:09:30.955 "num_base_bdevs": 2, 00:09:30.955 "num_base_bdevs_discovered": 1, 00:09:30.955 "num_base_bdevs_operational": 1, 00:09:30.955 "base_bdevs_list": [ 00:09:30.955 { 00:09:30.955 "name": null, 00:09:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.955 "is_configured": false, 00:09:30.955 "data_offset": 2048, 00:09:30.955 "data_size": 63488 00:09:30.955 }, 00:09:30.955 { 00:09:30.955 "name": "pt2", 00:09:30.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.955 "is_configured": true, 00:09:30.955 "data_offset": 2048, 00:09:30.955 "data_size": 63488 00:09:30.955 } 00:09:30.955 ] 00:09:30.955 }' 00:09:30.955 11:20:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.955 11:20:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.893 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:31.893 [2024-07-25 11:20:47.749165] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.893 [2024-07-25 11:20:47.749215] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.893 [2024-07-25 11:20:47.749312] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.893 [2024-07-25 11:20:47.749382] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.893 [2024-07-25 11:20:47.749397] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.893 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.893 11:20:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:09:32.472 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:09:32.473 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.731 [2024-07-25 11:20:48.537313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.731 [2024-07-25 11:20:48.537408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:09:32.731 [2024-07-25 11:20:48.537442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:32.731 [2024-07-25 11:20:48.537457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.731 [2024-07-25 11:20:48.540293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.731 [2024-07-25 11:20:48.540337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.731 [2024-07-25 11:20:48.540479] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:32.731 [2024-07-25 11:20:48.540543] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.731 [2024-07-25 11:20:48.540713] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:32.731 [2024-07-25 11:20:48.540729] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.731 [2024-07-25 11:20:48.541032] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:32.731 [2024-07-25 11:20:48.541226] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:32.731 [2024-07-25 11:20:48.541249] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:32.731 [2024-07-25 11:20:48.541457] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.731 pt2 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:32.731 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.732 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.732 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.732 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.732 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.732 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.990 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.990 "name": "raid_bdev1", 00:09:32.990 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:32.990 "strip_size_kb": 0, 00:09:32.990 "state": "online", 00:09:32.990 "raid_level": "raid1", 00:09:32.990 "superblock": true, 00:09:32.990 "num_base_bdevs": 2, 00:09:32.990 "num_base_bdevs_discovered": 1, 00:09:32.990 "num_base_bdevs_operational": 1, 00:09:32.990 "base_bdevs_list": [ 00:09:32.990 { 00:09:32.990 "name": null, 00:09:32.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.990 
"is_configured": false, 00:09:32.990 "data_offset": 2048, 00:09:32.990 "data_size": 63488 00:09:32.990 }, 00:09:32.990 { 00:09:32.990 "name": "pt2", 00:09:32.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.990 "is_configured": true, 00:09:32.990 "data_offset": 2048, 00:09:32.990 "data_size": 63488 00:09:32.990 } 00:09:32.990 ] 00:09:32.990 }' 00:09:32.991 11:20:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.991 11:20:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.927 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:33.927 [2024-07-25 11:20:49.742088] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.927 [2024-07-25 11:20:49.742141] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.927 [2024-07-25 11:20:49.742243] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.927 [2024-07-25 11:20:49.742308] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.927 [2024-07-25 11:20:49.742327] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:33.927 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:09:33.927 11:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.185 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:09:34.185 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:09:34.185 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:09:34.185 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.445 [2024-07-25 11:20:50.297935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.445 [2024-07-25 11:20:50.298033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.445 [2024-07-25 11:20:50.298062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:34.445 [2024-07-25 11:20:50.298080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.445 [2024-07-25 11:20:50.300850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.445 [2024-07-25 11:20:50.300901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.445 [2024-07-25 11:20:50.301006] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.445 [2024-07-25 11:20:50.301071] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.445 [2024-07-25 11:20:50.301247] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:34.445 [2024-07-25 11:20:50.301268] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.445 [2024-07-25 11:20:50.301288] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:34.445 [2024-07-25 11:20:50.301365] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.445 [2024-07-25 11:20:50.301466] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:34.445 [2024-07-25 11:20:50.301486] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:34.445 [2024-07-25 11:20:50.301813] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:34.445 [2024-07-25 11:20:50.302005] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:34.445 [2024-07-25 11:20:50.302021] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:34.445 [2024-07-25 11:20:50.302241] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.445 pt1 00:09:34.704 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:09:34.704 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.705 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.963 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.963 "name": "raid_bdev1", 00:09:34.963 "uuid": "76c26406-f669-494a-b2b3-2499069dd40b", 00:09:34.963 "strip_size_kb": 0, 00:09:34.963 "state": "online", 00:09:34.963 "raid_level": "raid1", 00:09:34.963 "superblock": true, 00:09:34.963 "num_base_bdevs": 2, 00:09:34.963 "num_base_bdevs_discovered": 1, 00:09:34.963 "num_base_bdevs_operational": 1, 00:09:34.963 "base_bdevs_list": [ 00:09:34.963 { 00:09:34.963 "name": null, 00:09:34.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.963 "is_configured": false, 00:09:34.963 "data_offset": 2048, 00:09:34.963 "data_size": 63488 00:09:34.963 }, 00:09:34.963 { 00:09:34.963 "name": "pt2", 00:09:34.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.963 "is_configured": true, 00:09:34.963 "data_offset": 2048, 00:09:34.963 "data_size": 63488 00:09:34.963 } 00:09:34.963 ] 00:09:34.963 }' 00:09:34.963 11:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:09:34.963 11:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.529 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:35.529 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:35.788 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:09:35.788 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:09:35.788 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:36.046 [2024-07-25 11:20:51.792514] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 76c26406-f669-494a-b2b3-2499069dd40b '!=' 76c26406-f669-494a-b2b3-2499069dd40b ']' 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 66337 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66337 ']' 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66337 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66337 00:09:36.047 killing process with pid 66337 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66337' 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66337 00:09:36.047 [2024-07-25 11:20:51.848734] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.047 11:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66337 00:09:36.047 [2024-07-25 11:20:51.848853] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.047 [2024-07-25 11:20:51.848922] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.047 [2024-07-25 11:20:51.848936] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:36.304 [2024-07-25 11:20:52.038155] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.676 11:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:09:37.676 00:09:37.677 real 0m18.450s 00:09:37.677 user 0m33.173s 00:09:37.677 sys 0m2.349s 00:09:37.677 ************************************ 00:09:37.677 END TEST raid_superblock_test 00:09:37.677 ************************************ 00:09:37.677 11:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.677 11:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.677 11:20:53 
bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:37.677 11:20:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:37.677 11:20:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.677 11:20:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.677 ************************************ 00:09:37.677 START TEST raid_read_error_test 00:09:37.677 ************************************ 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.SBGKnORmGJ 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=66872 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 66872 /var/tmp/spdk-raid.sock 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 66872 ']' 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:37.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.677 11:20:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.677 [2024-07-25 11:20:53.438384] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:09:37.677 [2024-07-25 11:20:53.438556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66872 ] 00:09:37.934 [2024-07-25 11:20:53.610854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.192 [2024-07-25 11:20:53.850894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.192 [2024-07-25 11:20:54.052879] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.192 [2024-07-25 11:20:54.052957] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.779 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.779 11:20:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:38.779 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:38.779 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.779 BaseBdev1_malloc 00:09:38.779 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:39.036 true 00:09:39.036 11:20:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:39.295 [2024-07-25 11:20:55.074605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:39.295 [2024-07-25 11:20:55.074747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.295 [2024-07-25 11:20:55.074783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:39.295 [2024-07-25 11:20:55.074799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.295 [2024-07-25 11:20:55.077640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.295 [2024-07-25 11:20:55.077693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:39.295 BaseBdev1 00:09:39.295 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:39.295 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:39.553 BaseBdev2_malloc 00:09:39.553 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:39.812 true 00:09:39.812 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:40.072 [2024-07-25 11:20:55.906981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:40.072 [2024-07-25 11:20:55.907065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.072 [2024-07-25 11:20:55.907108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:40.072 [2024-07-25 11:20:55.907123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.072 [2024-07-25 11:20:55.909964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.072 [2024-07-25 11:20:55.910069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:40.072 BaseBdev2 00:09:40.072 11:20:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:40.331 [2024-07-25 11:20:56.183125] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.331 [2024-07-25 11:20:56.185689] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.331 [2024-07-25 11:20:56.186035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.331 [2024-07-25 11:20:56.186055] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.331 [2024-07-25 11:20:56.186421] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:40.331 [2024-07-25 11:20:56.186663] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.331 [2024-07-25 11:20:56.186877] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.331 [2024-07-25 11:20:56.187205] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.331 11:20:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.331 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.898 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.898 "name": "raid_bdev1", 00:09:40.898 "uuid": "78d576a6-a3bc-465e-9e06-7ab57054a7ec", 00:09:40.898 "strip_size_kb": 0, 00:09:40.898 "state": "online", 00:09:40.898 "raid_level": "raid1", 00:09:40.898 "superblock": true, 00:09:40.898 "num_base_bdevs": 2, 00:09:40.898 "num_base_bdevs_discovered": 2, 00:09:40.898 "num_base_bdevs_operational": 2, 00:09:40.898 "base_bdevs_list": [ 00:09:40.898 { 00:09:40.898 "name": "BaseBdev1", 00:09:40.898 "uuid": "ffd80730-d078-5fa1-ad1c-ef94f4585703", 00:09:40.898 "is_configured": true, 00:09:40.898 "data_offset": 2048, 00:09:40.898 "data_size": 63488 00:09:40.898 }, 00:09:40.898 { 00:09:40.898 "name": "BaseBdev2", 00:09:40.898 "uuid": "0a6ba676-72ae-5d9b-8ca6-04ae282fb933", 00:09:40.898 "is_configured": true, 00:09:40.898 "data_offset": 2048, 00:09:40.898 "data_size": 63488 00:09:40.898 } 00:09:40.898 ] 00:09:40.898 }' 00:09:40.898 11:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.898 11:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.466 11:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:09:41.466 11:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:41.466 [2024-07-25 11:20:57.280941] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:42.402 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.662 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.921 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.921 "name": "raid_bdev1", 00:09:42.921 "uuid": "78d576a6-a3bc-465e-9e06-7ab57054a7ec", 00:09:42.921 "strip_size_kb": 0, 00:09:42.921 "state": "online", 00:09:42.921 "raid_level": "raid1", 00:09:42.921 "superblock": true, 00:09:42.921 "num_base_bdevs": 2, 00:09:42.921 "num_base_bdevs_discovered": 2, 00:09:42.921 "num_base_bdevs_operational": 2, 00:09:42.921 "base_bdevs_list": [ 00:09:42.921 { 00:09:42.921 "name": "BaseBdev1", 00:09:42.921 "uuid": "ffd80730-d078-5fa1-ad1c-ef94f4585703", 00:09:42.921 "is_configured": true, 00:09:42.921 "data_offset": 2048, 00:09:42.921 "data_size": 63488 00:09:42.921 }, 00:09:42.921 { 00:09:42.921 "name": "BaseBdev2", 00:09:42.921 "uuid": "0a6ba676-72ae-5d9b-8ca6-04ae282fb933", 00:09:42.921 "is_configured": true, 00:09:42.921 "data_offset": 2048, 00:09:42.921 "data_size": 63488 00:09:42.921 } 00:09:42.921 ] 00:09:42.921 }' 00:09:42.921 11:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.921 11:20:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:43.858 [2024-07-25 11:20:59.641063] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.858 [2024-07-25 11:20:59.641401] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.858 [2024-07-25 11:20:59.644763] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.858 [2024-07-25 11:20:59.644973] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.858 [2024-07-25 11:20:59.645192] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.858 [2024-07-25 11:20:59.645360] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, sta0 00:09:43.858 te offline 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 66872 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 66872 ']' 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 66872 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66872 00:09:43.858 killing process with pid 66872 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.858 11:20:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66872' 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 66872 00:09:43.858 11:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 66872 00:09:43.858 [2024-07-25 11:20:59.699929] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.117 [2024-07-25 11:20:59.820105] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.SBGKnORmGJ 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:45.502 00:09:45.502 real 0m7.736s 00:09:45.502 user 0m11.681s 00:09:45.502 sys 0m0.896s 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.502 ************************************ 00:09:45.502 END TEST raid_read_error_test 00:09:45.502 ************************************ 00:09:45.502 11:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.502 11:21:01 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:45.502 11:21:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.502 11:21:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.502 11:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.502 ************************************ 00:09:45.502 START TEST raid_write_error_test 00:09:45.502 ************************************ 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:09:45.502 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.HOFq77s5Qd 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=67060 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 67060 /var/tmp/spdk-raid.sock 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67060 ']' 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:45.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.503 11:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.503 [2024-07-25 11:21:01.209458] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:09:45.503 [2024-07-25 11:21:01.209609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67060 ] 00:09:45.503 [2024-07-25 11:21:01.376253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.070 [2024-07-25 11:21:01.663782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.070 [2024-07-25 11:21:01.869368] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.070 [2024-07-25 11:21:01.869413] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.328 11:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.328 11:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.328 11:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:46.328 11:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.895 BaseBdev1_malloc 00:09:46.895 11:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:46.895 true 00:09:46.895 11:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:47.154 [2024-07-25 11:21:03.010367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:47.154 [2024-07-25 11:21:03.010479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.154 [2024-07-25 11:21:03.010516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:47.154 [2024-07-25 11:21:03.010549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.154 [2024-07-25 11:21:03.013543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.154 [2024-07-25 11:21:03.013591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:47.154 BaseBdev1 00:09:47.154 11:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:09:47.154 11:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:47.764 BaseBdev2_malloc 00:09:47.764 11:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:47.764 true 00:09:47.764 11:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.023 [2024-07-25 11:21:03.819222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.023 [2024-07-25 11:21:03.819344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.023 [2024-07-25 11:21:03.819402] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.023 [2024-07-25 11:21:03.819429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.023 [2024-07-25 11:21:03.823273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.023 [2024-07-25 11:21:03.823346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.023 BaseBdev2 00:09:48.023 11:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:48.282 [2024-07-25 11:21:04.051761] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.282 [2024-07-25 11:21:04.054225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.282 [2024-07-25 11:21:04.054526] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.282 [2024-07-25 11:21:04.054565] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.282 [2024-07-25 11:21:04.054945] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.282 [2024-07-25 11:21:04.055191] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.282 [2024-07-25 11:21:04.055220] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:48.282 [2024-07-25 11:21:04.055452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.282 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.541 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.541 "name": "raid_bdev1", 00:09:48.541 "uuid": "276a166b-a327-4c79-909f-b3602407343f", 00:09:48.541 "strip_size_kb": 0, 00:09:48.541 "state": "online", 00:09:48.541 "raid_level": "raid1", 00:09:48.541 "superblock": true, 00:09:48.541 "num_base_bdevs": 2, 00:09:48.541 "num_base_bdevs_discovered": 
2, 00:09:48.541 "num_base_bdevs_operational": 2, 00:09:48.541 "base_bdevs_list": [ 00:09:48.541 { 00:09:48.541 "name": "BaseBdev1", 00:09:48.541 "uuid": "59c3e6f8-c0f3-541e-893d-e2a493446d6f", 00:09:48.541 "is_configured": true, 00:09:48.541 "data_offset": 2048, 00:09:48.541 "data_size": 63488 00:09:48.541 }, 00:09:48.541 { 00:09:48.541 "name": "BaseBdev2", 00:09:48.541 "uuid": "3d318b77-b8a7-5e7a-816f-30f41ef2cc9e", 00:09:48.541 "is_configured": true, 00:09:48.541 "data_offset": 2048, 00:09:48.541 "data_size": 63488 00:09:48.541 } 00:09:48.541 ] 00:09:48.541 }' 00:09:48.541 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.541 11:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.109 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:09:49.109 11:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:49.367 [2024-07-25 11:21:05.073406] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:50.302 11:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.561 [2024-07-25 11:21:06.191607] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:50.561 [2024-07-25 11:21:06.191701] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.561 [2024-07-25 11:21:06.191944] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.561 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.561 11:21:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.820 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.820 "name": "raid_bdev1", 00:09:50.820 "uuid": "276a166b-a327-4c79-909f-b3602407343f", 00:09:50.820 "strip_size_kb": 0, 00:09:50.820 "state": "online", 00:09:50.820 "raid_level": "raid1", 00:09:50.820 "superblock": true, 00:09:50.820 "num_base_bdevs": 2, 00:09:50.820 "num_base_bdevs_discovered": 1, 00:09:50.820 "num_base_bdevs_operational": 1, 00:09:50.820 "base_bdevs_list": [ 00:09:50.820 { 00:09:50.820 "name": null, 00:09:50.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.820 "is_configured": false, 00:09:50.820 "data_offset": 2048, 00:09:50.820 "data_size": 63488 00:09:50.820 }, 00:09:50.820 { 00:09:50.820 "name": "BaseBdev2", 00:09:50.820 "uuid": "3d318b77-b8a7-5e7a-816f-30f41ef2cc9e", 00:09:50.820 "is_configured": true, 00:09:50.820 "data_offset": 2048, 00:09:50.820 "data_size": 63488 00:09:50.820 } 00:09:50.820 ] 00:09:50.820 }' 00:09:50.820 11:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.820 11:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.387 11:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:51.651 [2024-07-25 11:21:07.489265] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.651 [2024-07-25 11:21:07.489640] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.651 [2024-07-25 11:21:07.493020] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.651 [2024-07-25 11:21:07.493273] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.651 [2024-07-25 11:21:07.493450] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.651 [2024-07-25 11:21:07.493611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, sta0 00:09:51.651 te offline 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 67060 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67060 ']' 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67060 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.651 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67060 00:09:51.909 killing process with pid 67060 00:09:51.909 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.909 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.909 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67060' 00:09:51.909 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67060 00:09:51.909 11:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67060 00:09:51.909 [2024-07-25 
11:21:07.544612] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.909 [2024-07-25 11:21:07.673733] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.HOFq77s5Qd 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:09:53.287 ************************************ 00:09:53.287 END TEST raid_write_error_test 00:09:53.287 ************************************ 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:53.287 00:09:53.287 real 0m7.838s 00:09:53.287 user 0m11.798s 00:09:53.287 sys 0m0.945s 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.287 11:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.287 11:21:08 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:09:53.287 11:21:08 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:09:53.287 11:21:08 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:53.287 11:21:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:53.287 11:21:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.287 11:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.287 ************************************ 00:09:53.287 START TEST raid_state_function_test 00:09:53.287 ************************************ 00:09:53.287 11:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:09:53.287 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:53.287 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:53.287 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:53.287 11:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:53.287 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:53.288 Process raid pid: 67250 00:09:53.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=67250 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 67250' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 67250 /var/tmp/spdk-raid.sock 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67250 ']' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.288 11:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.288 [2024-07-25 11:21:09.104416] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:09:53.288 [2024-07-25 11:21:09.104874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.546 [2024-07-25 11:21:09.283600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.804 [2024-07-25 11:21:09.544596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.063 [2024-07-25 11:21:09.753772] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.063 [2024-07-25 11:21:09.754044] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.321 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.321 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:54.321 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:54.579 [2024-07-25 11:21:10.363839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.579 [2024-07-25 11:21:10.364147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.579 [2024-07-25 11:21:10.364308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.579 [2024-07-25 11:21:10.364339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.579 [2024-07-25 11:21:10.364356] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.579 [2024-07-25 11:21:10.364369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.579 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.837 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:09:54.837 "name": "Existed_Raid", 00:09:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.837 "strip_size_kb": 64, 00:09:54.837 "state": "configuring", 00:09:54.837 "raid_level": "raid0", 00:09:54.837 "superblock": false, 00:09:54.837 "num_base_bdevs": 3, 00:09:54.837 "num_base_bdevs_discovered": 0, 00:09:54.837 "num_base_bdevs_operational": 3, 00:09:54.837 "base_bdevs_list": [ 00:09:54.837 { 00:09:54.837 "name": "BaseBdev1", 00:09:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.837 "is_configured": false, 00:09:54.837 "data_offset": 0, 00:09:54.837 "data_size": 0 00:09:54.837 }, 00:09:54.837 { 00:09:54.837 "name": "BaseBdev2", 00:09:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.837 "is_configured": false, 00:09:54.837 "data_offset": 0, 00:09:54.837 "data_size": 0 00:09:54.837 }, 00:09:54.837 { 00:09:54.837 "name": "BaseBdev3", 00:09:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.837 "is_configured": false, 00:09:54.838 "data_offset": 0, 00:09:54.838 "data_size": 0 00:09:54.838 } 00:09:54.838 ] 00:09:54.838 }' 00:09:54.838 11:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.838 11:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.770 11:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:55.770 [2024-07-25 11:21:11.567978] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.770 [2024-07-25 11:21:11.568024] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.770 11:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:56.027 [2024-07-25 11:21:11.792058] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.027 [2024-07-25 11:21:11.792124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.027 [2024-07-25 11:21:11.792147] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.027 [2024-07-25 11:21:11.792161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.027 [2024-07-25 11:21:11.792174] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.027 [2024-07-25 11:21:11.792185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.027 11:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.286 [2024-07-25 11:21:12.129928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.286 BaseBdev1 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.286 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:56.544 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.803 [ 00:09:56.803 { 00:09:56.803 "name": "BaseBdev1", 00:09:56.803 "aliases": [ 00:09:56.803 "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa" 00:09:56.803 ], 00:09:56.803 "product_name": "Malloc disk", 00:09:56.803 "block_size": 512, 00:09:56.803 "num_blocks": 65536, 00:09:56.803 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:09:56.803 "assigned_rate_limits": { 00:09:56.803 "rw_ios_per_sec": 0, 00:09:56.803 "rw_mbytes_per_sec": 0, 00:09:56.803 "r_mbytes_per_sec": 0, 00:09:56.803 "w_mbytes_per_sec": 0 00:09:56.803 }, 00:09:56.803 "claimed": true, 00:09:56.803 "claim_type": "exclusive_write", 00:09:56.803 "zoned": false, 00:09:56.803 "supported_io_types": { 00:09:56.803 "read": true, 00:09:56.803 "write": true, 00:09:56.803 "unmap": true, 00:09:56.803 "flush": true, 00:09:56.803 "reset": true, 00:09:56.803 "nvme_admin": false, 00:09:56.803 "nvme_io": false, 00:09:56.803 "nvme_io_md": false, 00:09:56.803 "write_zeroes": true, 00:09:56.803 "zcopy": true, 00:09:56.803 "get_zone_info": false, 00:09:56.803 "zone_management": false, 00:09:56.803 "zone_append": false, 00:09:56.803 "compare": false, 00:09:56.803 "compare_and_write": false, 00:09:56.803 "abort": true, 00:09:56.803 "seek_hole": false, 00:09:56.803 "seek_data": false, 00:09:56.803 "copy": true, 00:09:56.803 "nvme_iov_md": false 00:09:56.803 }, 00:09:56.803 "memory_domains": [ 00:09:56.803 { 00:09:56.803 "dma_device_id": "system", 00:09:56.803 "dma_device_type": 1 00:09:56.803 }, 00:09:56.803 { 00:09:56.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.803 "dma_device_type": 2 00:09:56.803 } 00:09:56.803 ], 00:09:56.803 "driver_specific": {} 00:09:56.803 } 00:09:56.803 ] 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.803 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.062 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.062 "name": "Existed_Raid", 00:09:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.062 "strip_size_kb": 64, 00:09:57.062 "state": "configuring", 00:09:57.062 "raid_level": "raid0", 00:09:57.062 "superblock": false, 00:09:57.062 "num_base_bdevs": 3, 00:09:57.062 "num_base_bdevs_discovered": 1, 00:09:57.062 "num_base_bdevs_operational": 3, 00:09:57.062 "base_bdevs_list": [ 00:09:57.062 { 00:09:57.062 "name": "BaseBdev1", 00:09:57.062 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:09:57.062 "is_configured": true, 00:09:57.062 "data_offset": 0, 00:09:57.062 "data_size": 65536 00:09:57.062 }, 00:09:57.062 { 00:09:57.062 "name": "BaseBdev2", 00:09:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.062 "is_configured": false, 00:09:57.062 "data_offset": 0, 00:09:57.062 "data_size": 0 00:09:57.062 }, 00:09:57.062 { 00:09:57.062 "name": "BaseBdev3", 00:09:57.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.062 "is_configured": false, 00:09:57.062 "data_offset": 0, 00:09:57.062 "data_size": 0 00:09:57.062 } 00:09:57.062 ] 00:09:57.062 }' 00:09:57.062 11:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.062 11:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.647 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:57.904 [2024-07-25 11:21:13.670456] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.904 [2024-07-25 11:21:13.670532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:57.904 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:58.162 [2024-07-25 11:21:13.946580] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.162 [2024-07-25 11:21:13.949034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.162 [2024-07-25 11:21:13.949090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.162 [2024-07-25 11:21:13.949111] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.162 [2024-07-25 11:21:13.949126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.162 11:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.420 11:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.420 "name": "Existed_Raid", 00:09:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.420 "strip_size_kb": 64, 00:09:58.420 "state": "configuring", 00:09:58.420 "raid_level": "raid0", 00:09:58.420 "superblock": false, 00:09:58.420 "num_base_bdevs": 3, 00:09:58.420 "num_base_bdevs_discovered": 1, 00:09:58.420 "num_base_bdevs_operational": 3, 00:09:58.420 "base_bdevs_list": [ 00:09:58.420 { 00:09:58.420 "name": "BaseBdev1", 00:09:58.420 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:09:58.420 "is_configured": true, 00:09:58.420 "data_offset": 0, 00:09:58.420 "data_size": 65536 00:09:58.420 }, 00:09:58.420 { 00:09:58.420 "name": "BaseBdev2", 00:09:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.420 "is_configured": false, 00:09:58.420 "data_offset": 0, 00:09:58.420 "data_size": 0 00:09:58.420 }, 00:09:58.420 { 00:09:58.420 "name": "BaseBdev3", 00:09:58.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.420 "is_configured": false, 00:09:58.420 "data_offset": 0, 00:09:58.421 "data_size": 0 00:09:58.421 } 00:09:58.421 ] 00:09:58.421 }' 00:09:58.421 11:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.421 11:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.353 11:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.353 [2024-07-25 11:21:15.193949] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.353 BaseBdev2 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
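At this point the trace shows the test creating BaseBdev2 with bdev_malloc_create 32 512 -b BaseBdev2 and settling it through the waitforbdev helper, which sets a 2000 ms timeout, calls bdev_wait_for_examine, and then reads the bdev back with bdev_get_bdevs. A minimal standalone sketch of that RPC sequence is shown below; the socket path, RPC names, and arguments are taken from this log, while the retry loop and error handling are illustrative assumptions rather than the autotest_common.sh implementation.

    #!/usr/bin/env bash
    # Hedged sketch: create a 32 MiB malloc bdev with 512-byte blocks and wait
    # until it is visible, mirroring the bdev_malloc_create ->
    # bdev_wait_for_examine -> bdev_get_bdevs sequence in the surrounding trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    "$rpc" -s "$sock" bdev_wait_for_examine

    # bdev_get_bdevs -t already waits up to 2000 ms; the outer loop is an
    # assumed safety margin, not part of the original helper.
    for _ in $(seq 1 10); do
        if "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null; then
            echo "BaseBdev2 is ready"
            break
        fi
        sleep 1
    done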
00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.353 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:59.610 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.868 [ 00:09:59.868 { 00:09:59.868 "name": "BaseBdev2", 00:09:59.868 "aliases": [ 00:09:59.868 "f8f84a11-5fd2-46b8-b1ae-bde1751db93b" 00:09:59.868 ], 00:09:59.868 "product_name": "Malloc disk", 00:09:59.868 "block_size": 512, 00:09:59.868 "num_blocks": 65536, 00:09:59.868 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:09:59.868 "assigned_rate_limits": { 00:09:59.868 "rw_ios_per_sec": 0, 00:09:59.868 "rw_mbytes_per_sec": 0, 00:09:59.868 "r_mbytes_per_sec": 0, 00:09:59.868 "w_mbytes_per_sec": 0 00:09:59.868 }, 00:09:59.868 "claimed": true, 00:09:59.868 "claim_type": "exclusive_write", 00:09:59.868 "zoned": false, 00:09:59.868 "supported_io_types": { 00:09:59.868 "read": true, 00:09:59.868 "write": true, 00:09:59.868 "unmap": true, 00:09:59.868 "flush": true, 00:09:59.868 "reset": true, 00:09:59.868 "nvme_admin": false, 00:09:59.868 "nvme_io": false, 00:09:59.868 "nvme_io_md": false, 00:09:59.868 "write_zeroes": true, 00:09:59.868 "zcopy": true, 00:09:59.868 "get_zone_info": false, 00:09:59.868 "zone_management": false, 00:09:59.868 "zone_append": false, 00:09:59.868 "compare": false, 00:09:59.868 "compare_and_write": false, 00:09:59.868 "abort": true, 00:09:59.868 "seek_hole": false, 00:09:59.868 "seek_data": false, 00:09:59.868 "copy": true, 00:09:59.868 "nvme_iov_md": false 00:09:59.868 }, 00:09:59.868 "memory_domains": [ 00:09:59.868 { 00:09:59.868 "dma_device_id": "system", 00:09:59.868 "dma_device_type": 1 00:09:59.868 }, 00:09:59.868 { 00:09:59.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.868 "dma_device_type": 2 00:09:59.868 } 00:09:59.868 ], 00:09:59.868 "driver_specific": {} 00:09:59.868 } 00:09:59.868 ] 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.868 
11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.868 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.125 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:00.125 "name": "Existed_Raid", 00:10:00.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.125 "strip_size_kb": 64, 00:10:00.125 "state": "configuring", 00:10:00.126 "raid_level": "raid0", 00:10:00.126 "superblock": false, 00:10:00.126 "num_base_bdevs": 3, 00:10:00.126 "num_base_bdevs_discovered": 2, 00:10:00.126 "num_base_bdevs_operational": 3, 00:10:00.126 "base_bdevs_list": [ 00:10:00.126 { 00:10:00.126 "name": "BaseBdev1", 00:10:00.126 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:10:00.126 "is_configured": true, 00:10:00.126 "data_offset": 0, 00:10:00.126 "data_size": 65536 00:10:00.126 }, 00:10:00.126 { 00:10:00.126 "name": "BaseBdev2", 00:10:00.126 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:10:00.126 "is_configured": true, 00:10:00.126 "data_offset": 0, 00:10:00.126 "data_size": 65536 00:10:00.126 }, 00:10:00.126 { 00:10:00.126 "name": "BaseBdev3", 00:10:00.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.126 "is_configured": false, 00:10:00.126 "data_offset": 0, 00:10:00.126 "data_size": 0 00:10:00.126 } 00:10:00.126 ] 00:10:00.126 }' 00:10:00.126 11:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:00.126 11:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.067 11:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.325 [2024-07-25 11:21:16.992753] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.325 [2024-07-25 11:21:16.992813] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:01.325 [2024-07-25 11:21:16.992828] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:01.325 [2024-07-25 11:21:16.993170] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:01.325 [2024-07-25 11:21:16.993383] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:01.325 [2024-07-25 11:21:16.993406] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:01.325 [2024-07-25 11:21:16.993741] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.325 BaseBdev3 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.325 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:01.586 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.845 [ 00:10:01.845 { 00:10:01.845 "name": "BaseBdev3", 00:10:01.845 "aliases": [ 00:10:01.845 "9107e734-4eeb-4a90-8f32-b4eebac8026d" 00:10:01.845 ], 00:10:01.845 "product_name": "Malloc disk", 00:10:01.845 "block_size": 512, 00:10:01.845 "num_blocks": 65536, 00:10:01.845 "uuid": "9107e734-4eeb-4a90-8f32-b4eebac8026d", 00:10:01.845 "assigned_rate_limits": { 00:10:01.845 "rw_ios_per_sec": 0, 00:10:01.845 "rw_mbytes_per_sec": 0, 00:10:01.845 "r_mbytes_per_sec": 0, 00:10:01.845 "w_mbytes_per_sec": 0 00:10:01.845 }, 00:10:01.845 "claimed": true, 00:10:01.845 "claim_type": "exclusive_write", 00:10:01.845 "zoned": false, 00:10:01.845 "supported_io_types": { 00:10:01.845 "read": true, 00:10:01.845 "write": true, 00:10:01.845 "unmap": true, 00:10:01.845 "flush": true, 00:10:01.845 "reset": true, 00:10:01.845 "nvme_admin": false, 00:10:01.845 "nvme_io": false, 00:10:01.845 "nvme_io_md": false, 00:10:01.845 "write_zeroes": true, 00:10:01.845 "zcopy": true, 00:10:01.845 "get_zone_info": false, 00:10:01.845 "zone_management": false, 00:10:01.845 "zone_append": false, 00:10:01.845 "compare": false, 00:10:01.845 "compare_and_write": false, 00:10:01.845 "abort": true, 00:10:01.845 "seek_hole": false, 00:10:01.845 "seek_data": false, 00:10:01.845 "copy": true, 00:10:01.845 "nvme_iov_md": false 00:10:01.845 }, 00:10:01.845 "memory_domains": [ 00:10:01.845 { 00:10:01.845 "dma_device_id": "system", 00:10:01.845 "dma_device_type": 1 00:10:01.845 }, 00:10:01.845 { 00:10:01.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.845 "dma_device_type": 2 00:10:01.845 } 00:10:01.845 ], 00:10:01.845 "driver_specific": {} 00:10:01.845 } 00:10:01.845 ] 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.845 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.103 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:02.103 "name": "Existed_Raid", 00:10:02.103 "uuid": "3189fb35-e4f7-4b27-9a5e-a12991f7ddbd", 00:10:02.103 "strip_size_kb": 64, 00:10:02.103 "state": "online", 00:10:02.103 "raid_level": "raid0", 00:10:02.103 "superblock": false, 00:10:02.103 "num_base_bdevs": 3, 00:10:02.103 "num_base_bdevs_discovered": 3, 00:10:02.103 "num_base_bdevs_operational": 3, 00:10:02.103 "base_bdevs_list": [ 00:10:02.103 { 00:10:02.103 "name": "BaseBdev1", 00:10:02.103 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:10:02.103 "is_configured": true, 00:10:02.103 "data_offset": 0, 00:10:02.103 "data_size": 65536 00:10:02.103 }, 00:10:02.103 { 00:10:02.103 "name": "BaseBdev2", 00:10:02.103 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:10:02.103 "is_configured": true, 00:10:02.103 "data_offset": 0, 00:10:02.103 "data_size": 65536 00:10:02.103 }, 00:10:02.103 { 00:10:02.103 "name": "BaseBdev3", 00:10:02.103 "uuid": "9107e734-4eeb-4a90-8f32-b4eebac8026d", 00:10:02.103 "is_configured": true, 00:10:02.103 "data_offset": 0, 00:10:02.103 "data_size": 65536 00:10:02.103 } 00:10:02.103 ] 00:10:02.103 }' 00:10:02.103 11:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:02.103 11:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:02.669 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:02.927 [2024-07-25 11:21:18.621685] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:02.927 "name": "Existed_Raid", 00:10:02.927 "aliases": [ 00:10:02.927 "3189fb35-e4f7-4b27-9a5e-a12991f7ddbd" 00:10:02.927 ], 00:10:02.927 "product_name": "Raid Volume", 00:10:02.927 "block_size": 512, 00:10:02.927 "num_blocks": 196608, 00:10:02.927 "uuid": "3189fb35-e4f7-4b27-9a5e-a12991f7ddbd", 00:10:02.927 "assigned_rate_limits": { 00:10:02.927 "rw_ios_per_sec": 0, 00:10:02.927 "rw_mbytes_per_sec": 0, 00:10:02.927 "r_mbytes_per_sec": 0, 00:10:02.927 "w_mbytes_per_sec": 0 00:10:02.927 }, 00:10:02.927 "claimed": false, 00:10:02.927 "zoned": false, 00:10:02.927 "supported_io_types": { 00:10:02.927 "read": true, 00:10:02.927 
"write": true, 00:10:02.927 "unmap": true, 00:10:02.927 "flush": true, 00:10:02.927 "reset": true, 00:10:02.927 "nvme_admin": false, 00:10:02.927 "nvme_io": false, 00:10:02.927 "nvme_io_md": false, 00:10:02.927 "write_zeroes": true, 00:10:02.927 "zcopy": false, 00:10:02.927 "get_zone_info": false, 00:10:02.927 "zone_management": false, 00:10:02.927 "zone_append": false, 00:10:02.927 "compare": false, 00:10:02.927 "compare_and_write": false, 00:10:02.927 "abort": false, 00:10:02.927 "seek_hole": false, 00:10:02.927 "seek_data": false, 00:10:02.927 "copy": false, 00:10:02.927 "nvme_iov_md": false 00:10:02.927 }, 00:10:02.927 "memory_domains": [ 00:10:02.927 { 00:10:02.927 "dma_device_id": "system", 00:10:02.927 "dma_device_type": 1 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.927 "dma_device_type": 2 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "dma_device_id": "system", 00:10:02.927 "dma_device_type": 1 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.927 "dma_device_type": 2 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "dma_device_id": "system", 00:10:02.927 "dma_device_type": 1 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.927 "dma_device_type": 2 00:10:02.927 } 00:10:02.927 ], 00:10:02.927 "driver_specific": { 00:10:02.927 "raid": { 00:10:02.927 "uuid": "3189fb35-e4f7-4b27-9a5e-a12991f7ddbd", 00:10:02.927 "strip_size_kb": 64, 00:10:02.927 "state": "online", 00:10:02.927 "raid_level": "raid0", 00:10:02.927 "superblock": false, 00:10:02.927 "num_base_bdevs": 3, 00:10:02.927 "num_base_bdevs_discovered": 3, 00:10:02.927 "num_base_bdevs_operational": 3, 00:10:02.927 "base_bdevs_list": [ 00:10:02.927 { 00:10:02.927 "name": "BaseBdev1", 00:10:02.927 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:10:02.927 "is_configured": true, 00:10:02.927 "data_offset": 0, 00:10:02.927 "data_size": 65536 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "name": "BaseBdev2", 00:10:02.927 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:10:02.927 "is_configured": true, 00:10:02.927 "data_offset": 0, 00:10:02.927 "data_size": 65536 00:10:02.927 }, 00:10:02.927 { 00:10:02.927 "name": "BaseBdev3", 00:10:02.927 "uuid": "9107e734-4eeb-4a90-8f32-b4eebac8026d", 00:10:02.927 "is_configured": true, 00:10:02.927 "data_offset": 0, 00:10:02.927 "data_size": 65536 00:10:02.927 } 00:10:02.927 ] 00:10:02.927 } 00:10:02.927 } 00:10:02.927 }' 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:02.927 BaseBdev2 00:10:02.927 BaseBdev3' 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:02.927 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.185 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.185 "name": "BaseBdev1", 00:10:03.185 "aliases": [ 00:10:03.185 "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa" 00:10:03.185 ], 00:10:03.185 "product_name": "Malloc disk", 00:10:03.185 "block_size": 512, 00:10:03.185 
"num_blocks": 65536, 00:10:03.185 "uuid": "4f19ec59-f1b6-4148-8245-6ad1b0b35bfa", 00:10:03.185 "assigned_rate_limits": { 00:10:03.185 "rw_ios_per_sec": 0, 00:10:03.185 "rw_mbytes_per_sec": 0, 00:10:03.185 "r_mbytes_per_sec": 0, 00:10:03.185 "w_mbytes_per_sec": 0 00:10:03.185 }, 00:10:03.185 "claimed": true, 00:10:03.185 "claim_type": "exclusive_write", 00:10:03.185 "zoned": false, 00:10:03.185 "supported_io_types": { 00:10:03.185 "read": true, 00:10:03.185 "write": true, 00:10:03.185 "unmap": true, 00:10:03.185 "flush": true, 00:10:03.185 "reset": true, 00:10:03.185 "nvme_admin": false, 00:10:03.185 "nvme_io": false, 00:10:03.185 "nvme_io_md": false, 00:10:03.185 "write_zeroes": true, 00:10:03.185 "zcopy": true, 00:10:03.185 "get_zone_info": false, 00:10:03.185 "zone_management": false, 00:10:03.185 "zone_append": false, 00:10:03.185 "compare": false, 00:10:03.185 "compare_and_write": false, 00:10:03.185 "abort": true, 00:10:03.185 "seek_hole": false, 00:10:03.185 "seek_data": false, 00:10:03.185 "copy": true, 00:10:03.185 "nvme_iov_md": false 00:10:03.185 }, 00:10:03.185 "memory_domains": [ 00:10:03.185 { 00:10:03.185 "dma_device_id": "system", 00:10:03.185 "dma_device_type": 1 00:10:03.185 }, 00:10:03.185 { 00:10:03.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.185 "dma_device_type": 2 00:10:03.185 } 00:10:03.185 ], 00:10:03.185 "driver_specific": {} 00:10:03.185 }' 00:10:03.185 11:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.185 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:03.442 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.443 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.700 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:03.700 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:03.700 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:03.700 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.958 "name": "BaseBdev2", 00:10:03.958 "aliases": [ 00:10:03.958 "f8f84a11-5fd2-46b8-b1ae-bde1751db93b" 00:10:03.958 ], 00:10:03.958 "product_name": "Malloc disk", 00:10:03.958 "block_size": 512, 00:10:03.958 "num_blocks": 65536, 00:10:03.958 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:10:03.958 "assigned_rate_limits": { 00:10:03.958 "rw_ios_per_sec": 0, 00:10:03.958 "rw_mbytes_per_sec": 0, 
00:10:03.958 "r_mbytes_per_sec": 0, 00:10:03.958 "w_mbytes_per_sec": 0 00:10:03.958 }, 00:10:03.958 "claimed": true, 00:10:03.958 "claim_type": "exclusive_write", 00:10:03.958 "zoned": false, 00:10:03.958 "supported_io_types": { 00:10:03.958 "read": true, 00:10:03.958 "write": true, 00:10:03.958 "unmap": true, 00:10:03.958 "flush": true, 00:10:03.958 "reset": true, 00:10:03.958 "nvme_admin": false, 00:10:03.958 "nvme_io": false, 00:10:03.958 "nvme_io_md": false, 00:10:03.958 "write_zeroes": true, 00:10:03.958 "zcopy": true, 00:10:03.958 "get_zone_info": false, 00:10:03.958 "zone_management": false, 00:10:03.958 "zone_append": false, 00:10:03.958 "compare": false, 00:10:03.958 "compare_and_write": false, 00:10:03.958 "abort": true, 00:10:03.958 "seek_hole": false, 00:10:03.958 "seek_data": false, 00:10:03.958 "copy": true, 00:10:03.958 "nvme_iov_md": false 00:10:03.958 }, 00:10:03.958 "memory_domains": [ 00:10:03.958 { 00:10:03.958 "dma_device_id": "system", 00:10:03.958 "dma_device_type": 1 00:10:03.958 }, 00:10:03.958 { 00:10:03.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.958 "dma_device_type": 2 00:10:03.958 } 00:10:03.958 ], 00:10:03.958 "driver_specific": {} 00:10:03.958 }' 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.958 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.216 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.216 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.216 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.216 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.216 11:21:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.216 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:04.216 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:04.216 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:04.476 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:04.476 "name": "BaseBdev3", 00:10:04.476 "aliases": [ 00:10:04.476 "9107e734-4eeb-4a90-8f32-b4eebac8026d" 00:10:04.476 ], 00:10:04.476 "product_name": "Malloc disk", 00:10:04.476 "block_size": 512, 00:10:04.476 "num_blocks": 65536, 00:10:04.476 "uuid": "9107e734-4eeb-4a90-8f32-b4eebac8026d", 00:10:04.476 "assigned_rate_limits": { 00:10:04.476 "rw_ios_per_sec": 0, 00:10:04.476 "rw_mbytes_per_sec": 0, 00:10:04.476 "r_mbytes_per_sec": 0, 00:10:04.476 "w_mbytes_per_sec": 0 00:10:04.476 }, 00:10:04.476 "claimed": true, 00:10:04.476 "claim_type": "exclusive_write", 00:10:04.476 "zoned": false, 
00:10:04.476 "supported_io_types": { 00:10:04.476 "read": true, 00:10:04.476 "write": true, 00:10:04.476 "unmap": true, 00:10:04.476 "flush": true, 00:10:04.476 "reset": true, 00:10:04.476 "nvme_admin": false, 00:10:04.476 "nvme_io": false, 00:10:04.476 "nvme_io_md": false, 00:10:04.476 "write_zeroes": true, 00:10:04.476 "zcopy": true, 00:10:04.476 "get_zone_info": false, 00:10:04.476 "zone_management": false, 00:10:04.476 "zone_append": false, 00:10:04.476 "compare": false, 00:10:04.476 "compare_and_write": false, 00:10:04.476 "abort": true, 00:10:04.476 "seek_hole": false, 00:10:04.476 "seek_data": false, 00:10:04.476 "copy": true, 00:10:04.476 "nvme_iov_md": false 00:10:04.476 }, 00:10:04.476 "memory_domains": [ 00:10:04.476 { 00:10:04.476 "dma_device_id": "system", 00:10:04.476 "dma_device_type": 1 00:10:04.476 }, 00:10:04.476 { 00:10:04.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.476 "dma_device_type": 2 00:10:04.476 } 00:10:04.476 ], 00:10:04.476 "driver_specific": {} 00:10:04.476 }' 00:10:04.476 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.476 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.734 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.991 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.991 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.991 11:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:05.248 [2024-07-25 11:21:20.953992] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.248 [2024-07-25 11:21:20.954044] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.248 [2024-07-25 11:21:20.954113] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.248 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.505 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:05.505 "name": "Existed_Raid", 00:10:05.505 "uuid": "3189fb35-e4f7-4b27-9a5e-a12991f7ddbd", 00:10:05.505 "strip_size_kb": 64, 00:10:05.505 "state": "offline", 00:10:05.505 "raid_level": "raid0", 00:10:05.505 "superblock": false, 00:10:05.505 "num_base_bdevs": 3, 00:10:05.505 "num_base_bdevs_discovered": 2, 00:10:05.505 "num_base_bdevs_operational": 2, 00:10:05.505 "base_bdevs_list": [ 00:10:05.505 { 00:10:05.505 "name": null, 00:10:05.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.505 "is_configured": false, 00:10:05.505 "data_offset": 0, 00:10:05.505 "data_size": 65536 00:10:05.505 }, 00:10:05.505 { 00:10:05.505 "name": "BaseBdev2", 00:10:05.505 "uuid": "f8f84a11-5fd2-46b8-b1ae-bde1751db93b", 00:10:05.505 "is_configured": true, 00:10:05.505 "data_offset": 0, 00:10:05.505 "data_size": 65536 00:10:05.505 }, 00:10:05.505 { 00:10:05.505 "name": "BaseBdev3", 00:10:05.505 "uuid": "9107e734-4eeb-4a90-8f32-b4eebac8026d", 00:10:05.505 "is_configured": true, 00:10:05.505 "data_offset": 0, 00:10:05.505 "data_size": 65536 00:10:05.505 } 00:10:05.505 ] 00:10:05.505 }' 00:10:05.505 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:05.505 11:21:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.439 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:06.439 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:06.439 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:06.439 11:21:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.439 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:06.439 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.439 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 
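The deletions above exercise raid0's lack of redundancy: once a base bdev disappears, Existed_Raid is expected to drop from online to offline, and the test re-reads the array state with bdev_raid_get_bdevs plus a jq filter. A hedged sketch of that state check follows, reusing only the RPC call and jq expression that appear in this trace; the expected-state comparison is a simplification of the verify_raid_bdev_state helper, not its full implementation.

    #!/usr/bin/env bash
    # Hedged sketch: query the raid bdev state the way the trace does and
    # compare it against the expected value (offline for a degraded raid0).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch all raid bdevs and keep only Existed_Raid, as in the trace.
    raid_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')

    state=$(jq -r '.state' <<< "$raid_info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_info")
    echo "state=$state num_base_bdevs_discovered=$discovered"

    # raid0 has no redundancy, so losing a base bdev should leave it offline.
    [[ $state == offline ]] || { echo "unexpected state: $state" >&2; exit 1; }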
00:10:06.697 [2024-07-25 11:21:22.430083] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.697 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:06.697 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:06.697 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.697 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:06.956 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:06.956 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.956 11:21:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:07.214 [2024-07-25 11:21:22.995904] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.215 [2024-07-25 11:21:22.995991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:07.485 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.761 BaseBdev2 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:08.019 11:21:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.276 [ 00:10:08.276 { 00:10:08.276 "name": "BaseBdev2", 00:10:08.276 "aliases": [ 00:10:08.276 "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57" 00:10:08.276 ], 00:10:08.276 "product_name": "Malloc disk", 00:10:08.276 "block_size": 512, 00:10:08.276 "num_blocks": 65536, 00:10:08.276 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:08.276 "assigned_rate_limits": { 00:10:08.276 "rw_ios_per_sec": 0, 00:10:08.276 "rw_mbytes_per_sec": 0, 00:10:08.276 "r_mbytes_per_sec": 0, 00:10:08.276 "w_mbytes_per_sec": 0 00:10:08.276 }, 00:10:08.276 "claimed": false, 00:10:08.276 "zoned": false, 00:10:08.276 "supported_io_types": { 00:10:08.276 "read": true, 00:10:08.276 "write": true, 00:10:08.276 "unmap": true, 00:10:08.276 "flush": true, 00:10:08.276 "reset": true, 00:10:08.276 "nvme_admin": false, 00:10:08.276 "nvme_io": false, 00:10:08.276 "nvme_io_md": false, 00:10:08.276 "write_zeroes": true, 00:10:08.276 "zcopy": true, 00:10:08.276 "get_zone_info": false, 00:10:08.276 "zone_management": false, 00:10:08.276 "zone_append": false, 00:10:08.276 "compare": false, 00:10:08.276 "compare_and_write": false, 00:10:08.276 "abort": true, 00:10:08.276 "seek_hole": false, 00:10:08.276 "seek_data": false, 00:10:08.276 "copy": true, 00:10:08.276 "nvme_iov_md": false 00:10:08.276 }, 00:10:08.276 "memory_domains": [ 00:10:08.277 { 00:10:08.277 "dma_device_id": "system", 00:10:08.277 "dma_device_type": 1 00:10:08.277 }, 00:10:08.277 { 00:10:08.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.277 "dma_device_type": 2 00:10:08.277 } 00:10:08.277 ], 00:10:08.277 "driver_specific": {} 00:10:08.277 } 00:10:08.277 ] 00:10:08.277 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:08.277 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:08.277 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:08.277 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.535 BaseBdev3 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.535 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:08.793 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.051 [ 00:10:09.051 { 00:10:09.051 "name": "BaseBdev3", 00:10:09.051 "aliases": [ 00:10:09.051 "05b6d08a-9adf-4b69-b4ac-7661c8c58519" 00:10:09.051 ], 00:10:09.051 "product_name": "Malloc disk", 00:10:09.051 "block_size": 512, 00:10:09.051 "num_blocks": 65536, 00:10:09.051 
"uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:09.051 "assigned_rate_limits": { 00:10:09.051 "rw_ios_per_sec": 0, 00:10:09.051 "rw_mbytes_per_sec": 0, 00:10:09.051 "r_mbytes_per_sec": 0, 00:10:09.051 "w_mbytes_per_sec": 0 00:10:09.051 }, 00:10:09.051 "claimed": false, 00:10:09.051 "zoned": false, 00:10:09.051 "supported_io_types": { 00:10:09.051 "read": true, 00:10:09.051 "write": true, 00:10:09.051 "unmap": true, 00:10:09.051 "flush": true, 00:10:09.051 "reset": true, 00:10:09.051 "nvme_admin": false, 00:10:09.051 "nvme_io": false, 00:10:09.051 "nvme_io_md": false, 00:10:09.051 "write_zeroes": true, 00:10:09.051 "zcopy": true, 00:10:09.051 "get_zone_info": false, 00:10:09.051 "zone_management": false, 00:10:09.051 "zone_append": false, 00:10:09.051 "compare": false, 00:10:09.051 "compare_and_write": false, 00:10:09.051 "abort": true, 00:10:09.051 "seek_hole": false, 00:10:09.051 "seek_data": false, 00:10:09.051 "copy": true, 00:10:09.051 "nvme_iov_md": false 00:10:09.051 }, 00:10:09.051 "memory_domains": [ 00:10:09.051 { 00:10:09.051 "dma_device_id": "system", 00:10:09.051 "dma_device_type": 1 00:10:09.051 }, 00:10:09.051 { 00:10:09.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.051 "dma_device_type": 2 00:10:09.051 } 00:10:09.051 ], 00:10:09.051 "driver_specific": {} 00:10:09.051 } 00:10:09.051 ] 00:10:09.051 11:21:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:09.051 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:09.051 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:09.051 11:21:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:09.310 [2024-07-25 11:21:25.072066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.310 [2024-07-25 11:21:25.072144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.310 [2024-07-25 11:21:25.072208] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.310 [2024-07-25 11:21:25.074505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.310 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.568 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.568 "name": "Existed_Raid", 00:10:09.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.568 "strip_size_kb": 64, 00:10:09.568 "state": "configuring", 00:10:09.568 "raid_level": "raid0", 00:10:09.568 "superblock": false, 00:10:09.569 "num_base_bdevs": 3, 00:10:09.569 "num_base_bdevs_discovered": 2, 00:10:09.569 "num_base_bdevs_operational": 3, 00:10:09.569 "base_bdevs_list": [ 00:10:09.569 { 00:10:09.569 "name": "BaseBdev1", 00:10:09.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.569 "is_configured": false, 00:10:09.569 "data_offset": 0, 00:10:09.569 "data_size": 0 00:10:09.569 }, 00:10:09.569 { 00:10:09.569 "name": "BaseBdev2", 00:10:09.569 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:09.569 "is_configured": true, 00:10:09.569 "data_offset": 0, 00:10:09.569 "data_size": 65536 00:10:09.569 }, 00:10:09.569 { 00:10:09.569 "name": "BaseBdev3", 00:10:09.569 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:09.569 "is_configured": true, 00:10:09.569 "data_offset": 0, 00:10:09.569 "data_size": 65536 00:10:09.569 } 00:10:09.569 ] 00:10:09.569 }' 00:10:09.569 11:21:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.569 11:21:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.135 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:10.393 [2024-07-25 11:21:26.212326] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.393 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.960 11:21:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.960 "name": "Existed_Raid", 00:10:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.960 "strip_size_kb": 64, 00:10:10.960 "state": "configuring", 00:10:10.960 "raid_level": "raid0", 00:10:10.960 "superblock": false, 00:10:10.960 "num_base_bdevs": 3, 00:10:10.960 "num_base_bdevs_discovered": 1, 00:10:10.960 "num_base_bdevs_operational": 3, 00:10:10.960 "base_bdevs_list": [ 00:10:10.960 { 00:10:10.960 "name": "BaseBdev1", 00:10:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.960 "is_configured": false, 00:10:10.960 "data_offset": 0, 00:10:10.960 "data_size": 0 00:10:10.960 }, 00:10:10.960 { 00:10:10.960 "name": null, 00:10:10.960 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:10.960 "is_configured": false, 00:10:10.960 "data_offset": 0, 00:10:10.960 "data_size": 65536 00:10:10.960 }, 00:10:10.960 { 00:10:10.960 "name": "BaseBdev3", 00:10:10.960 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:10.960 "is_configured": true, 00:10:10.960 "data_offset": 0, 00:10:10.960 "data_size": 65536 00:10:10.960 } 00:10:10.960 ] 00:10:10.960 }' 00:10:10.960 11:21:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.960 11:21:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.528 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.528 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.786 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:11.786 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.045 [2024-07-25 11:21:27.701238] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.045 BaseBdev1 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.045 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:12.303 11:21:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.561 [ 00:10:12.561 { 00:10:12.561 "name": "BaseBdev1", 00:10:12.561 "aliases": [ 00:10:12.561 "e72f7402-2c60-4fca-a356-abeef6e9fecd" 00:10:12.561 ], 00:10:12.561 "product_name": "Malloc disk", 00:10:12.561 "block_size": 512, 00:10:12.561 "num_blocks": 65536, 00:10:12.561 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:12.561 
"assigned_rate_limits": { 00:10:12.561 "rw_ios_per_sec": 0, 00:10:12.561 "rw_mbytes_per_sec": 0, 00:10:12.561 "r_mbytes_per_sec": 0, 00:10:12.561 "w_mbytes_per_sec": 0 00:10:12.561 }, 00:10:12.562 "claimed": true, 00:10:12.562 "claim_type": "exclusive_write", 00:10:12.562 "zoned": false, 00:10:12.562 "supported_io_types": { 00:10:12.562 "read": true, 00:10:12.562 "write": true, 00:10:12.562 "unmap": true, 00:10:12.562 "flush": true, 00:10:12.562 "reset": true, 00:10:12.562 "nvme_admin": false, 00:10:12.562 "nvme_io": false, 00:10:12.562 "nvme_io_md": false, 00:10:12.562 "write_zeroes": true, 00:10:12.562 "zcopy": true, 00:10:12.562 "get_zone_info": false, 00:10:12.562 "zone_management": false, 00:10:12.562 "zone_append": false, 00:10:12.562 "compare": false, 00:10:12.562 "compare_and_write": false, 00:10:12.562 "abort": true, 00:10:12.562 "seek_hole": false, 00:10:12.562 "seek_data": false, 00:10:12.562 "copy": true, 00:10:12.562 "nvme_iov_md": false 00:10:12.562 }, 00:10:12.562 "memory_domains": [ 00:10:12.562 { 00:10:12.562 "dma_device_id": "system", 00:10:12.562 "dma_device_type": 1 00:10:12.562 }, 00:10:12.562 { 00:10:12.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.562 "dma_device_type": 2 00:10:12.562 } 00:10:12.562 ], 00:10:12.562 "driver_specific": {} 00:10:12.562 } 00:10:12.562 ] 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.562 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.819 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.819 "name": "Existed_Raid", 00:10:12.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.819 "strip_size_kb": 64, 00:10:12.819 "state": "configuring", 00:10:12.819 "raid_level": "raid0", 00:10:12.819 "superblock": false, 00:10:12.819 "num_base_bdevs": 3, 00:10:12.819 "num_base_bdevs_discovered": 2, 00:10:12.819 "num_base_bdevs_operational": 3, 00:10:12.819 "base_bdevs_list": [ 00:10:12.819 { 00:10:12.819 "name": "BaseBdev1", 00:10:12.819 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:12.819 "is_configured": true, 
00:10:12.819 "data_offset": 0, 00:10:12.819 "data_size": 65536 00:10:12.819 }, 00:10:12.819 { 00:10:12.819 "name": null, 00:10:12.819 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:12.819 "is_configured": false, 00:10:12.819 "data_offset": 0, 00:10:12.819 "data_size": 65536 00:10:12.819 }, 00:10:12.819 { 00:10:12.819 "name": "BaseBdev3", 00:10:12.819 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:12.819 "is_configured": true, 00:10:12.819 "data_offset": 0, 00:10:12.819 "data_size": 65536 00:10:12.819 } 00:10:12.819 ] 00:10:12.819 }' 00:10:12.819 11:21:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.819 11:21:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.385 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.385 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.644 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:13.644 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:13.903 [2024-07-25 11:21:29.669930] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.903 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.205 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.205 "name": "Existed_Raid", 00:10:14.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.205 "strip_size_kb": 64, 00:10:14.205 "state": "configuring", 00:10:14.205 "raid_level": "raid0", 00:10:14.205 "superblock": false, 00:10:14.205 "num_base_bdevs": 3, 00:10:14.205 "num_base_bdevs_discovered": 1, 00:10:14.205 "num_base_bdevs_operational": 3, 00:10:14.205 "base_bdevs_list": [ 00:10:14.205 { 00:10:14.205 "name": "BaseBdev1", 00:10:14.205 "uuid": 
"e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:14.205 "is_configured": true, 00:10:14.205 "data_offset": 0, 00:10:14.205 "data_size": 65536 00:10:14.205 }, 00:10:14.205 { 00:10:14.205 "name": null, 00:10:14.205 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:14.205 "is_configured": false, 00:10:14.205 "data_offset": 0, 00:10:14.205 "data_size": 65536 00:10:14.205 }, 00:10:14.205 { 00:10:14.205 "name": null, 00:10:14.205 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:14.205 "is_configured": false, 00:10:14.205 "data_offset": 0, 00:10:14.205 "data_size": 65536 00:10:14.205 } 00:10:14.205 ] 00:10:14.205 }' 00:10:14.205 11:21:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.205 11:21:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.771 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.771 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.029 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:15.029 11:21:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.287 [2024-07-25 11:21:31.154331] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.546 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.804 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.805 "name": "Existed_Raid", 00:10:15.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.805 "strip_size_kb": 64, 00:10:15.805 "state": "configuring", 00:10:15.805 "raid_level": "raid0", 00:10:15.805 "superblock": false, 00:10:15.805 "num_base_bdevs": 3, 00:10:15.805 "num_base_bdevs_discovered": 2, 00:10:15.805 "num_base_bdevs_operational": 3, 00:10:15.805 "base_bdevs_list": 
[ 00:10:15.805 { 00:10:15.805 "name": "BaseBdev1", 00:10:15.805 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:15.805 "is_configured": true, 00:10:15.805 "data_offset": 0, 00:10:15.805 "data_size": 65536 00:10:15.805 }, 00:10:15.805 { 00:10:15.805 "name": null, 00:10:15.805 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:15.805 "is_configured": false, 00:10:15.805 "data_offset": 0, 00:10:15.805 "data_size": 65536 00:10:15.805 }, 00:10:15.805 { 00:10:15.805 "name": "BaseBdev3", 00:10:15.805 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:15.805 "is_configured": true, 00:10:15.805 "data_offset": 0, 00:10:15.805 "data_size": 65536 00:10:15.805 } 00:10:15.805 ] 00:10:15.805 }' 00:10:15.805 11:21:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.805 11:21:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.371 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.371 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:16.629 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:16.629 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:16.888 [2024-07-25 11:21:32.558783] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.888 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.146 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.146 "name": "Existed_Raid", 00:10:17.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.146 "strip_size_kb": 64, 00:10:17.146 "state": "configuring", 00:10:17.146 "raid_level": "raid0", 00:10:17.146 "superblock": false, 00:10:17.146 "num_base_bdevs": 3, 00:10:17.146 "num_base_bdevs_discovered": 1, 00:10:17.146 
"num_base_bdevs_operational": 3, 00:10:17.146 "base_bdevs_list": [ 00:10:17.146 { 00:10:17.146 "name": null, 00:10:17.146 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:17.146 "is_configured": false, 00:10:17.146 "data_offset": 0, 00:10:17.146 "data_size": 65536 00:10:17.146 }, 00:10:17.146 { 00:10:17.146 "name": null, 00:10:17.146 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:17.146 "is_configured": false, 00:10:17.146 "data_offset": 0, 00:10:17.146 "data_size": 65536 00:10:17.146 }, 00:10:17.146 { 00:10:17.146 "name": "BaseBdev3", 00:10:17.146 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:17.146 "is_configured": true, 00:10:17.146 "data_offset": 0, 00:10:17.146 "data_size": 65536 00:10:17.146 } 00:10:17.146 ] 00:10:17.146 }' 00:10:17.146 11:21:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.146 11:21:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.103 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.103 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.103 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:18.103 11:21:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.361 [2024-07-25 11:21:34.209224] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.361 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.619 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.619 "name": "Existed_Raid", 00:10:18.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.619 "strip_size_kb": 64, 00:10:18.619 "state": "configuring", 00:10:18.619 "raid_level": "raid0", 00:10:18.619 "superblock": false, 00:10:18.619 
"num_base_bdevs": 3, 00:10:18.619 "num_base_bdevs_discovered": 2, 00:10:18.619 "num_base_bdevs_operational": 3, 00:10:18.619 "base_bdevs_list": [ 00:10:18.619 { 00:10:18.619 "name": null, 00:10:18.619 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:18.619 "is_configured": false, 00:10:18.619 "data_offset": 0, 00:10:18.619 "data_size": 65536 00:10:18.619 }, 00:10:18.619 { 00:10:18.619 "name": "BaseBdev2", 00:10:18.619 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:18.619 "is_configured": true, 00:10:18.619 "data_offset": 0, 00:10:18.619 "data_size": 65536 00:10:18.619 }, 00:10:18.619 { 00:10:18.619 "name": "BaseBdev3", 00:10:18.619 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:18.619 "is_configured": true, 00:10:18.619 "data_offset": 0, 00:10:18.619 "data_size": 65536 00:10:18.619 } 00:10:18.619 ] 00:10:18.619 }' 00:10:18.619 11:21:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.619 11:21:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.551 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.551 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.809 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:19.809 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.809 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.066 11:21:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e72f7402-2c60-4fca-a356-abeef6e9fecd 00:10:20.324 [2024-07-25 11:21:36.005441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.324 [2024-07-25 11:21:36.005509] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.324 [2024-07-25 11:21:36.005522] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:20.324 [2024-07-25 11:21:36.005896] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:20.324 [2024-07-25 11:21:36.006078] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.324 [2024-07-25 11:21:36.006099] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.324 NewBaseBdev 00:10:20.324 [2024-07-25 11:21:36.006394] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.324 11:21:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.324 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.582 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.839 [ 00:10:20.839 { 00:10:20.839 "name": "NewBaseBdev", 00:10:20.839 "aliases": [ 00:10:20.839 "e72f7402-2c60-4fca-a356-abeef6e9fecd" 00:10:20.839 ], 00:10:20.839 "product_name": "Malloc disk", 00:10:20.839 "block_size": 512, 00:10:20.839 "num_blocks": 65536, 00:10:20.839 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:20.839 "assigned_rate_limits": { 00:10:20.839 "rw_ios_per_sec": 0, 00:10:20.839 "rw_mbytes_per_sec": 0, 00:10:20.839 "r_mbytes_per_sec": 0, 00:10:20.839 "w_mbytes_per_sec": 0 00:10:20.839 }, 00:10:20.840 "claimed": true, 00:10:20.840 "claim_type": "exclusive_write", 00:10:20.840 "zoned": false, 00:10:20.840 "supported_io_types": { 00:10:20.840 "read": true, 00:10:20.840 "write": true, 00:10:20.840 "unmap": true, 00:10:20.840 "flush": true, 00:10:20.840 "reset": true, 00:10:20.840 "nvme_admin": false, 00:10:20.840 "nvme_io": false, 00:10:20.840 "nvme_io_md": false, 00:10:20.840 "write_zeroes": true, 00:10:20.840 "zcopy": true, 00:10:20.840 "get_zone_info": false, 00:10:20.840 "zone_management": false, 00:10:20.840 "zone_append": false, 00:10:20.840 "compare": false, 00:10:20.840 "compare_and_write": false, 00:10:20.840 "abort": true, 00:10:20.840 "seek_hole": false, 00:10:20.840 "seek_data": false, 00:10:20.840 "copy": true, 00:10:20.840 "nvme_iov_md": false 00:10:20.840 }, 00:10:20.840 "memory_domains": [ 00:10:20.840 { 00:10:20.840 "dma_device_id": "system", 00:10:20.840 "dma_device_type": 1 00:10:20.840 }, 00:10:20.840 { 00:10:20.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.840 "dma_device_type": 2 00:10:20.840 } 00:10:20.840 ], 00:10:20.840 "driver_specific": {} 00:10:20.840 } 00:10:20.840 ] 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.840 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.098 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.098 "name": "Existed_Raid", 00:10:21.098 "uuid": "4049a154-3281-4f7a-be74-b3623845273a", 00:10:21.098 "strip_size_kb": 64, 00:10:21.098 "state": "online", 00:10:21.098 "raid_level": "raid0", 00:10:21.098 "superblock": false, 00:10:21.098 "num_base_bdevs": 3, 00:10:21.098 "num_base_bdevs_discovered": 3, 00:10:21.098 "num_base_bdevs_operational": 3, 00:10:21.098 "base_bdevs_list": [ 00:10:21.098 { 00:10:21.098 "name": "NewBaseBdev", 00:10:21.098 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:21.098 "is_configured": true, 00:10:21.098 "data_offset": 0, 00:10:21.098 "data_size": 65536 00:10:21.098 }, 00:10:21.098 { 00:10:21.098 "name": "BaseBdev2", 00:10:21.098 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:21.098 "is_configured": true, 00:10:21.098 "data_offset": 0, 00:10:21.098 "data_size": 65536 00:10:21.098 }, 00:10:21.098 { 00:10:21.098 "name": "BaseBdev3", 00:10:21.098 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:21.098 "is_configured": true, 00:10:21.098 "data_offset": 0, 00:10:21.098 "data_size": 65536 00:10:21.098 } 00:10:21.098 ] 00:10:21.098 }' 00:10:21.098 11:21:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.098 11:21:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:21.663 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:21.963 [2024-07-25 11:21:37.666337] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:21.963 "name": "Existed_Raid", 00:10:21.963 "aliases": [ 00:10:21.963 "4049a154-3281-4f7a-be74-b3623845273a" 00:10:21.963 ], 00:10:21.963 "product_name": "Raid Volume", 00:10:21.963 "block_size": 512, 00:10:21.963 "num_blocks": 196608, 00:10:21.963 "uuid": "4049a154-3281-4f7a-be74-b3623845273a", 00:10:21.963 "assigned_rate_limits": { 00:10:21.963 "rw_ios_per_sec": 0, 00:10:21.963 "rw_mbytes_per_sec": 0, 00:10:21.963 "r_mbytes_per_sec": 0, 00:10:21.963 "w_mbytes_per_sec": 0 00:10:21.963 }, 00:10:21.963 "claimed": false, 00:10:21.963 "zoned": false, 00:10:21.963 "supported_io_types": { 00:10:21.963 "read": true, 00:10:21.963 "write": true, 00:10:21.963 "unmap": true, 00:10:21.963 "flush": true, 00:10:21.963 "reset": true, 00:10:21.963 "nvme_admin": false, 00:10:21.963 
"nvme_io": false, 00:10:21.963 "nvme_io_md": false, 00:10:21.963 "write_zeroes": true, 00:10:21.963 "zcopy": false, 00:10:21.963 "get_zone_info": false, 00:10:21.963 "zone_management": false, 00:10:21.963 "zone_append": false, 00:10:21.963 "compare": false, 00:10:21.963 "compare_and_write": false, 00:10:21.963 "abort": false, 00:10:21.963 "seek_hole": false, 00:10:21.963 "seek_data": false, 00:10:21.963 "copy": false, 00:10:21.963 "nvme_iov_md": false 00:10:21.963 }, 00:10:21.963 "memory_domains": [ 00:10:21.963 { 00:10:21.963 "dma_device_id": "system", 00:10:21.963 "dma_device_type": 1 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.963 "dma_device_type": 2 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "dma_device_id": "system", 00:10:21.963 "dma_device_type": 1 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.963 "dma_device_type": 2 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "dma_device_id": "system", 00:10:21.963 "dma_device_type": 1 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.963 "dma_device_type": 2 00:10:21.963 } 00:10:21.963 ], 00:10:21.963 "driver_specific": { 00:10:21.963 "raid": { 00:10:21.963 "uuid": "4049a154-3281-4f7a-be74-b3623845273a", 00:10:21.963 "strip_size_kb": 64, 00:10:21.963 "state": "online", 00:10:21.963 "raid_level": "raid0", 00:10:21.963 "superblock": false, 00:10:21.963 "num_base_bdevs": 3, 00:10:21.963 "num_base_bdevs_discovered": 3, 00:10:21.963 "num_base_bdevs_operational": 3, 00:10:21.963 "base_bdevs_list": [ 00:10:21.963 { 00:10:21.963 "name": "NewBaseBdev", 00:10:21.963 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 0, 00:10:21.963 "data_size": 65536 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "name": "BaseBdev2", 00:10:21.963 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 0, 00:10:21.963 "data_size": 65536 00:10:21.963 }, 00:10:21.963 { 00:10:21.963 "name": "BaseBdev3", 00:10:21.963 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:21.963 "is_configured": true, 00:10:21.963 "data_offset": 0, 00:10:21.963 "data_size": 65536 00:10:21.963 } 00:10:21.963 ] 00:10:21.963 } 00:10:21.963 } 00:10:21.963 }' 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:21.963 BaseBdev2 00:10:21.963 BaseBdev3' 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:21.963 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.220 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.220 "name": "NewBaseBdev", 00:10:22.220 "aliases": [ 00:10:22.220 "e72f7402-2c60-4fca-a356-abeef6e9fecd" 00:10:22.220 ], 00:10:22.220 "product_name": "Malloc disk", 00:10:22.220 "block_size": 512, 00:10:22.220 "num_blocks": 65536, 00:10:22.220 "uuid": "e72f7402-2c60-4fca-a356-abeef6e9fecd", 00:10:22.220 "assigned_rate_limits": { 00:10:22.220 
"rw_ios_per_sec": 0, 00:10:22.220 "rw_mbytes_per_sec": 0, 00:10:22.220 "r_mbytes_per_sec": 0, 00:10:22.220 "w_mbytes_per_sec": 0 00:10:22.220 }, 00:10:22.220 "claimed": true, 00:10:22.220 "claim_type": "exclusive_write", 00:10:22.220 "zoned": false, 00:10:22.220 "supported_io_types": { 00:10:22.220 "read": true, 00:10:22.220 "write": true, 00:10:22.220 "unmap": true, 00:10:22.220 "flush": true, 00:10:22.220 "reset": true, 00:10:22.220 "nvme_admin": false, 00:10:22.220 "nvme_io": false, 00:10:22.220 "nvme_io_md": false, 00:10:22.220 "write_zeroes": true, 00:10:22.220 "zcopy": true, 00:10:22.220 "get_zone_info": false, 00:10:22.220 "zone_management": false, 00:10:22.220 "zone_append": false, 00:10:22.220 "compare": false, 00:10:22.220 "compare_and_write": false, 00:10:22.220 "abort": true, 00:10:22.220 "seek_hole": false, 00:10:22.220 "seek_data": false, 00:10:22.220 "copy": true, 00:10:22.220 "nvme_iov_md": false 00:10:22.220 }, 00:10:22.220 "memory_domains": [ 00:10:22.220 { 00:10:22.220 "dma_device_id": "system", 00:10:22.220 "dma_device_type": 1 00:10:22.220 }, 00:10:22.220 { 00:10:22.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.220 "dma_device_type": 2 00:10:22.220 } 00:10:22.220 ], 00:10:22.220 "driver_specific": {} 00:10:22.220 }' 00:10:22.220 11:21:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.220 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.220 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.220 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.220 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:22.478 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.736 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.736 "name": "BaseBdev2", 00:10:22.736 "aliases": [ 00:10:22.736 "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57" 00:10:22.736 ], 00:10:22.736 "product_name": "Malloc disk", 00:10:22.736 "block_size": 512, 00:10:22.736 "num_blocks": 65536, 00:10:22.736 "uuid": "1ed3d32c-9f2f-42aa-8cb8-b291376cfc57", 00:10:22.736 "assigned_rate_limits": { 00:10:22.736 "rw_ios_per_sec": 0, 00:10:22.736 "rw_mbytes_per_sec": 0, 00:10:22.736 "r_mbytes_per_sec": 0, 00:10:22.736 "w_mbytes_per_sec": 0 00:10:22.736 }, 00:10:22.736 "claimed": true, 00:10:22.736 
"claim_type": "exclusive_write", 00:10:22.736 "zoned": false, 00:10:22.736 "supported_io_types": { 00:10:22.736 "read": true, 00:10:22.736 "write": true, 00:10:22.736 "unmap": true, 00:10:22.736 "flush": true, 00:10:22.736 "reset": true, 00:10:22.736 "nvme_admin": false, 00:10:22.736 "nvme_io": false, 00:10:22.736 "nvme_io_md": false, 00:10:22.736 "write_zeroes": true, 00:10:22.736 "zcopy": true, 00:10:22.736 "get_zone_info": false, 00:10:22.736 "zone_management": false, 00:10:22.736 "zone_append": false, 00:10:22.736 "compare": false, 00:10:22.736 "compare_and_write": false, 00:10:22.736 "abort": true, 00:10:22.736 "seek_hole": false, 00:10:22.736 "seek_data": false, 00:10:22.736 "copy": true, 00:10:22.736 "nvme_iov_md": false 00:10:22.736 }, 00:10:22.736 "memory_domains": [ 00:10:22.736 { 00:10:22.736 "dma_device_id": "system", 00:10:22.736 "dma_device_type": 1 00:10:22.736 }, 00:10:22.736 { 00:10:22.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.736 "dma_device_type": 2 00:10:22.736 } 00:10:22.736 ], 00:10:22.736 "driver_specific": {} 00:10:22.736 }' 00:10:22.736 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.994 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.252 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.252 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.252 11:21:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.252 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:23.252 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:23.252 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:23.252 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:23.510 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:23.510 "name": "BaseBdev3", 00:10:23.510 "aliases": [ 00:10:23.510 "05b6d08a-9adf-4b69-b4ac-7661c8c58519" 00:10:23.510 ], 00:10:23.510 "product_name": "Malloc disk", 00:10:23.510 "block_size": 512, 00:10:23.510 "num_blocks": 65536, 00:10:23.510 "uuid": "05b6d08a-9adf-4b69-b4ac-7661c8c58519", 00:10:23.510 "assigned_rate_limits": { 00:10:23.510 "rw_ios_per_sec": 0, 00:10:23.510 "rw_mbytes_per_sec": 0, 00:10:23.510 "r_mbytes_per_sec": 0, 00:10:23.510 "w_mbytes_per_sec": 0 00:10:23.510 }, 00:10:23.510 "claimed": true, 00:10:23.510 "claim_type": "exclusive_write", 00:10:23.510 "zoned": false, 00:10:23.510 "supported_io_types": { 00:10:23.510 "read": true, 00:10:23.510 "write": true, 00:10:23.510 "unmap": true, 00:10:23.510 
"flush": true, 00:10:23.510 "reset": true, 00:10:23.510 "nvme_admin": false, 00:10:23.510 "nvme_io": false, 00:10:23.510 "nvme_io_md": false, 00:10:23.510 "write_zeroes": true, 00:10:23.510 "zcopy": true, 00:10:23.510 "get_zone_info": false, 00:10:23.510 "zone_management": false, 00:10:23.510 "zone_append": false, 00:10:23.510 "compare": false, 00:10:23.510 "compare_and_write": false, 00:10:23.510 "abort": true, 00:10:23.510 "seek_hole": false, 00:10:23.510 "seek_data": false, 00:10:23.510 "copy": true, 00:10:23.510 "nvme_iov_md": false 00:10:23.510 }, 00:10:23.510 "memory_domains": [ 00:10:23.510 { 00:10:23.510 "dma_device_id": "system", 00:10:23.510 "dma_device_type": 1 00:10:23.510 }, 00:10:23.510 { 00:10:23.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.511 "dma_device_type": 2 00:10:23.511 } 00:10:23.511 ], 00:10:23.511 "driver_specific": {} 00:10:23.511 }' 00:10:23.511 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.511 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:23.511 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:23.511 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:23.769 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:24.027 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:24.027 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:24.027 [2024-07-25 11:21:39.906528] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.027 [2024-07-25 11:21:39.906751] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.027 [2024-07-25 11:21:39.906974] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.027 [2024-07-25 11:21:39.907072] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.027 [2024-07-25 11:21:39.907090] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 67250 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67250 ']' 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67250 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.285 11:21:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67250 00:10:24.285 killing process with pid 67250 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67250' 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67250 00:10:24.285 [2024-07-25 11:21:39.952021] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.285 11:21:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67250 00:10:24.543 [2024-07-25 11:21:40.223324] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:25.920 00:10:25.920 real 0m32.409s 00:10:25.920 user 0m59.423s 00:10:25.920 sys 0m4.104s 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 ************************************ 00:10:25.920 END TEST raid_state_function_test 00:10:25.920 ************************************ 00:10:25.920 11:21:41 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:25.920 11:21:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:25.920 11:21:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.920 11:21:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 ************************************ 00:10:25.920 START TEST raid_state_function_test_sb 00:10:25.920 ************************************ 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 
-- # echo BaseBdev3 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:25.920 Process raid pid: 68222 00:10:25.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=68222 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 68222' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 68222 /var/tmp/spdk-raid.sock 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68222 ']' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.920 11:21:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.920 [2024-07-25 11:21:41.565036] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:10:25.920 [2024-07-25 11:21:41.565205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.920 [2024-07-25 11:21:41.746672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.229 [2024-07-25 11:21:42.027537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.487 [2024-07-25 11:21:42.231455] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.487 [2024-07-25 11:21:42.231502] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.747 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.747 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:26.747 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:27.006 [2024-07-25 11:21:42.709246] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:27.006 [2024-07-25 11:21:42.709326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:27.006 [2024-07-25 11:21:42.709347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.006 [2024-07-25 11:21:42.709361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.006 [2024-07-25 11:21:42.709375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.006 [2024-07-25 11:21:42.709391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.006 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.264 11:21:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.264 "name": "Existed_Raid", 00:10:27.264 "uuid": "0fd3c22d-b19f-441b-aee7-dbba2d3f964f", 00:10:27.264 "strip_size_kb": 64, 00:10:27.264 "state": "configuring", 00:10:27.264 "raid_level": "raid0", 00:10:27.264 "superblock": true, 00:10:27.264 "num_base_bdevs": 3, 00:10:27.264 "num_base_bdevs_discovered": 0, 00:10:27.264 "num_base_bdevs_operational": 3, 00:10:27.264 "base_bdevs_list": [ 00:10:27.264 { 00:10:27.264 "name": "BaseBdev1", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "is_configured": false, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 0 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "name": "BaseBdev2", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "is_configured": false, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 0 00:10:27.264 }, 00:10:27.264 { 00:10:27.264 "name": "BaseBdev3", 00:10:27.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.264 "is_configured": false, 00:10:27.264 "data_offset": 0, 00:10:27.264 "data_size": 0 00:10:27.264 } 00:10:27.264 ] 00:10:27.264 }' 00:10:27.264 11:21:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.264 11:21:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.831 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.089 [2024-07-25 11:21:43.793346] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.089 [2024-07-25 11:21:43.793636] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:28.089 11:21:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:28.347 [2024-07-25 11:21:44.057529] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.347 [2024-07-25 11:21:44.057603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.347 [2024-07-25 11:21:44.057649] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.347 [2024-07-25 11:21:44.057666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.347 [2024-07-25 11:21:44.057680] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.347 [2024-07-25 11:21:44.057701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.347 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.606 [2024-07-25 11:21:44.338239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.606 BaseBdev1 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.606 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:28.864 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.123 [ 00:10:29.123 { 00:10:29.123 "name": "BaseBdev1", 00:10:29.123 "aliases": [ 00:10:29.123 "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6" 00:10:29.123 ], 00:10:29.123 "product_name": "Malloc disk", 00:10:29.123 "block_size": 512, 00:10:29.123 "num_blocks": 65536, 00:10:29.123 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:29.123 "assigned_rate_limits": { 00:10:29.123 "rw_ios_per_sec": 0, 00:10:29.123 "rw_mbytes_per_sec": 0, 00:10:29.123 "r_mbytes_per_sec": 0, 00:10:29.123 "w_mbytes_per_sec": 0 00:10:29.123 }, 00:10:29.123 "claimed": true, 00:10:29.123 "claim_type": "exclusive_write", 00:10:29.123 "zoned": false, 00:10:29.123 "supported_io_types": { 00:10:29.123 "read": true, 00:10:29.123 "write": true, 00:10:29.123 "unmap": true, 00:10:29.123 "flush": true, 00:10:29.123 "reset": true, 00:10:29.123 "nvme_admin": false, 00:10:29.123 "nvme_io": false, 00:10:29.123 "nvme_io_md": false, 00:10:29.123 "write_zeroes": true, 00:10:29.123 "zcopy": true, 00:10:29.123 "get_zone_info": false, 00:10:29.123 "zone_management": false, 00:10:29.123 "zone_append": false, 00:10:29.123 "compare": false, 00:10:29.123 "compare_and_write": false, 00:10:29.123 "abort": true, 00:10:29.123 "seek_hole": false, 00:10:29.123 "seek_data": false, 00:10:29.123 "copy": true, 00:10:29.123 "nvme_iov_md": false 00:10:29.123 }, 00:10:29.123 "memory_domains": [ 00:10:29.123 { 00:10:29.123 "dma_device_id": "system", 00:10:29.123 "dma_device_type": 1 00:10:29.123 }, 00:10:29.123 { 00:10:29.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.123 "dma_device_type": 2 00:10:29.123 } 00:10:29.123 ], 00:10:29.123 "driver_specific": {} 00:10:29.123 } 00:10:29.123 ] 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.123 11:21:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.382 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.382 "name": "Existed_Raid", 00:10:29.382 "uuid": "9b442b3c-9e68-4e8b-bd67-3c4f7e5baac5", 00:10:29.382 "strip_size_kb": 64, 00:10:29.382 "state": "configuring", 00:10:29.382 "raid_level": "raid0", 00:10:29.382 "superblock": true, 00:10:29.382 "num_base_bdevs": 3, 00:10:29.382 "num_base_bdevs_discovered": 1, 00:10:29.382 "num_base_bdevs_operational": 3, 00:10:29.382 "base_bdevs_list": [ 00:10:29.382 { 00:10:29.382 "name": "BaseBdev1", 00:10:29.382 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:29.382 "is_configured": true, 00:10:29.382 "data_offset": 2048, 00:10:29.382 "data_size": 63488 00:10:29.382 }, 00:10:29.382 { 00:10:29.382 "name": "BaseBdev2", 00:10:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.382 "is_configured": false, 00:10:29.382 "data_offset": 0, 00:10:29.382 "data_size": 0 00:10:29.382 }, 00:10:29.382 { 00:10:29.382 "name": "BaseBdev3", 00:10:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.382 "is_configured": false, 00:10:29.382 "data_offset": 0, 00:10:29.382 "data_size": 0 00:10:29.382 } 00:10:29.382 ] 00:10:29.382 }' 00:10:29.382 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.382 11:21:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.316 11:21:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:30.316 [2024-07-25 11:21:46.106862] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.316 [2024-07-25 11:21:46.106942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:30.316 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:30.574 [2024-07-25 11:21:46.370963] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.574 [2024-07-25 11:21:46.373334] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.574 [2024-07-25 11:21:46.373386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.574 [2024-07-25 11:21:46.373407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.574 [2024-07-25 11:21:46.373421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.574 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.832 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.832 "name": "Existed_Raid", 00:10:30.832 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:30.832 "strip_size_kb": 64, 00:10:30.832 "state": "configuring", 00:10:30.832 "raid_level": "raid0", 00:10:30.832 "superblock": true, 00:10:30.832 "num_base_bdevs": 3, 00:10:30.832 "num_base_bdevs_discovered": 1, 00:10:30.832 "num_base_bdevs_operational": 3, 00:10:30.832 "base_bdevs_list": [ 00:10:30.832 { 00:10:30.832 "name": "BaseBdev1", 00:10:30.832 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:30.832 "is_configured": true, 00:10:30.832 "data_offset": 2048, 00:10:30.832 "data_size": 63488 00:10:30.832 }, 00:10:30.832 { 00:10:30.832 "name": "BaseBdev2", 00:10:30.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.832 "is_configured": false, 00:10:30.832 "data_offset": 0, 00:10:30.832 "data_size": 0 00:10:30.832 }, 00:10:30.832 { 00:10:30.832 "name": "BaseBdev3", 00:10:30.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.832 "is_configured": false, 00:10:30.832 "data_offset": 0, 00:10:30.832 "data_size": 0 00:10:30.832 } 00:10:30.832 ] 00:10:30.832 }' 00:10:30.832 11:21:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.832 11:21:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.801 11:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.801 [2024-07-25 11:21:47.602841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.801 BaseBdev2 00:10:31.801 11:21:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:31.801 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.801 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.801 
11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.801 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.802 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.802 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:32.060 11:21:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:32.317 [ 00:10:32.317 { 00:10:32.317 "name": "BaseBdev2", 00:10:32.317 "aliases": [ 00:10:32.317 "3b39c494-3cfc-4305-8ecc-b5053e87759a" 00:10:32.317 ], 00:10:32.317 "product_name": "Malloc disk", 00:10:32.317 "block_size": 512, 00:10:32.317 "num_blocks": 65536, 00:10:32.317 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:32.317 "assigned_rate_limits": { 00:10:32.317 "rw_ios_per_sec": 0, 00:10:32.317 "rw_mbytes_per_sec": 0, 00:10:32.317 "r_mbytes_per_sec": 0, 00:10:32.317 "w_mbytes_per_sec": 0 00:10:32.317 }, 00:10:32.317 "claimed": true, 00:10:32.317 "claim_type": "exclusive_write", 00:10:32.317 "zoned": false, 00:10:32.317 "supported_io_types": { 00:10:32.318 "read": true, 00:10:32.318 "write": true, 00:10:32.318 "unmap": true, 00:10:32.318 "flush": true, 00:10:32.318 "reset": true, 00:10:32.318 "nvme_admin": false, 00:10:32.318 "nvme_io": false, 00:10:32.318 "nvme_io_md": false, 00:10:32.318 "write_zeroes": true, 00:10:32.318 "zcopy": true, 00:10:32.318 "get_zone_info": false, 00:10:32.318 "zone_management": false, 00:10:32.318 "zone_append": false, 00:10:32.318 "compare": false, 00:10:32.318 "compare_and_write": false, 00:10:32.318 "abort": true, 00:10:32.318 "seek_hole": false, 00:10:32.318 "seek_data": false, 00:10:32.318 "copy": true, 00:10:32.318 "nvme_iov_md": false 00:10:32.318 }, 00:10:32.318 "memory_domains": [ 00:10:32.318 { 00:10:32.318 "dma_device_id": "system", 00:10:32.318 "dma_device_type": 1 00:10:32.318 }, 00:10:32.318 { 00:10:32.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.318 "dma_device_type": 2 00:10:32.318 } 00:10:32.318 ], 00:10:32.318 "driver_specific": {} 00:10:32.318 } 00:10:32.318 ] 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
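The verify_raid_bdev_state checks interleaved above reduce to one query plus field comparisons. A minimal sketch of that query, under the same socket-path assumption:

  # Dump all raid bdevs, keep Existed_Raid, and pull out the fields the helper compares
  # (state, raid_level, strip_size_kb, num_base_bdevs_discovered/operational).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq '.[] | select(.name == "Existed_Raid")
              | {state, raid_level, strip_size_kb, num_base_bdevs_discovered, num_base_bdevs_operational}'

In this run the array stays in the "configuring" state until all three base bdevs are claimed, at which point it transitions to "online".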
00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.318 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.576 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.576 "name": "Existed_Raid", 00:10:32.576 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:32.576 "strip_size_kb": 64, 00:10:32.576 "state": "configuring", 00:10:32.576 "raid_level": "raid0", 00:10:32.576 "superblock": true, 00:10:32.576 "num_base_bdevs": 3, 00:10:32.576 "num_base_bdevs_discovered": 2, 00:10:32.576 "num_base_bdevs_operational": 3, 00:10:32.576 "base_bdevs_list": [ 00:10:32.576 { 00:10:32.576 "name": "BaseBdev1", 00:10:32.576 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:32.576 "is_configured": true, 00:10:32.576 "data_offset": 2048, 00:10:32.576 "data_size": 63488 00:10:32.576 }, 00:10:32.576 { 00:10:32.576 "name": "BaseBdev2", 00:10:32.576 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:32.576 "is_configured": true, 00:10:32.576 "data_offset": 2048, 00:10:32.576 "data_size": 63488 00:10:32.576 }, 00:10:32.576 { 00:10:32.576 "name": "BaseBdev3", 00:10:32.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.576 "is_configured": false, 00:10:32.576 "data_offset": 0, 00:10:32.576 "data_size": 0 00:10:32.576 } 00:10:32.576 ] 00:10:32.576 }' 00:10:32.576 11:21:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:32.576 11:21:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.141 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.399 [2024-07-25 11:21:49.269061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.399 [2024-07-25 11:21:49.269352] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.399 [2024-07-25 11:21:49.269374] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:33.399 [2024-07-25 11:21:49.269742] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:33.399 [2024-07-25 11:21:49.269957] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:33.399 [2024-07-25 11:21:49.269982] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:33.399 BaseBdev3 00:10:33.399 [2024-07-25 11:21:49.270152] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.658 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.917 [ 00:10:33.917 { 00:10:33.917 "name": "BaseBdev3", 00:10:33.917 "aliases": [ 00:10:33.917 "f92140bf-cd44-4bfc-b69b-2f34cf125034" 00:10:33.917 ], 00:10:33.917 "product_name": "Malloc disk", 00:10:33.917 "block_size": 512, 00:10:33.917 "num_blocks": 65536, 00:10:33.917 "uuid": "f92140bf-cd44-4bfc-b69b-2f34cf125034", 00:10:33.917 "assigned_rate_limits": { 00:10:33.917 "rw_ios_per_sec": 0, 00:10:33.917 "rw_mbytes_per_sec": 0, 00:10:33.917 "r_mbytes_per_sec": 0, 00:10:33.917 "w_mbytes_per_sec": 0 00:10:33.917 }, 00:10:33.917 "claimed": true, 00:10:33.917 "claim_type": "exclusive_write", 00:10:33.917 "zoned": false, 00:10:33.917 "supported_io_types": { 00:10:33.917 "read": true, 00:10:33.917 "write": true, 00:10:33.917 "unmap": true, 00:10:33.917 "flush": true, 00:10:33.917 "reset": true, 00:10:33.917 "nvme_admin": false, 00:10:33.917 "nvme_io": false, 00:10:33.917 "nvme_io_md": false, 00:10:33.917 "write_zeroes": true, 00:10:33.917 "zcopy": true, 00:10:33.917 "get_zone_info": false, 00:10:33.917 "zone_management": false, 00:10:33.917 "zone_append": false, 00:10:33.917 "compare": false, 00:10:33.917 "compare_and_write": false, 00:10:33.917 "abort": true, 00:10:33.917 "seek_hole": false, 00:10:33.917 "seek_data": false, 00:10:33.917 "copy": true, 00:10:33.917 "nvme_iov_md": false 00:10:33.917 }, 00:10:33.917 "memory_domains": [ 00:10:33.917 { 00:10:33.917 "dma_device_id": "system", 00:10:33.917 "dma_device_type": 1 00:10:33.917 }, 00:10:33.917 { 00:10:33.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.917 "dma_device_type": 2 00:10:33.917 } 00:10:33.917 ], 00:10:33.917 "driver_specific": {} 00:10:33.917 } 00:10:33.917 ] 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.917 11:21:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.175 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.175 "name": "Existed_Raid", 00:10:34.175 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:34.175 "strip_size_kb": 64, 00:10:34.175 "state": "online", 00:10:34.175 "raid_level": "raid0", 00:10:34.175 "superblock": true, 00:10:34.175 "num_base_bdevs": 3, 00:10:34.175 "num_base_bdevs_discovered": 3, 00:10:34.175 "num_base_bdevs_operational": 3, 00:10:34.175 "base_bdevs_list": [ 00:10:34.175 { 00:10:34.175 "name": "BaseBdev1", 00:10:34.175 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 2048, 00:10:34.175 "data_size": 63488 00:10:34.175 }, 00:10:34.175 { 00:10:34.175 "name": "BaseBdev2", 00:10:34.175 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 2048, 00:10:34.175 "data_size": 63488 00:10:34.175 }, 00:10:34.175 { 00:10:34.175 "name": "BaseBdev3", 00:10:34.175 "uuid": "f92140bf-cd44-4bfc-b69b-2f34cf125034", 00:10:34.175 "is_configured": true, 00:10:34.175 "data_offset": 2048, 00:10:34.175 "data_size": 63488 00:10:34.175 } 00:10:34.175 ] 00:10:34.175 }' 00:10:34.175 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.175 11:21:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:35.118 [2024-07-25 11:21:50.909949] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.118 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:35.118 "name": "Existed_Raid", 00:10:35.118 "aliases": [ 00:10:35.118 "46bf246c-f775-4169-8e7a-1eddb140bc81" 00:10:35.118 ], 00:10:35.118 "product_name": "Raid Volume", 00:10:35.118 "block_size": 512, 00:10:35.118 "num_blocks": 190464, 
00:10:35.118 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:35.118 "assigned_rate_limits": { 00:10:35.118 "rw_ios_per_sec": 0, 00:10:35.118 "rw_mbytes_per_sec": 0, 00:10:35.118 "r_mbytes_per_sec": 0, 00:10:35.118 "w_mbytes_per_sec": 0 00:10:35.118 }, 00:10:35.118 "claimed": false, 00:10:35.118 "zoned": false, 00:10:35.118 "supported_io_types": { 00:10:35.118 "read": true, 00:10:35.118 "write": true, 00:10:35.118 "unmap": true, 00:10:35.118 "flush": true, 00:10:35.118 "reset": true, 00:10:35.118 "nvme_admin": false, 00:10:35.118 "nvme_io": false, 00:10:35.118 "nvme_io_md": false, 00:10:35.118 "write_zeroes": true, 00:10:35.118 "zcopy": false, 00:10:35.118 "get_zone_info": false, 00:10:35.118 "zone_management": false, 00:10:35.118 "zone_append": false, 00:10:35.118 "compare": false, 00:10:35.118 "compare_and_write": false, 00:10:35.118 "abort": false, 00:10:35.118 "seek_hole": false, 00:10:35.118 "seek_data": false, 00:10:35.118 "copy": false, 00:10:35.118 "nvme_iov_md": false 00:10:35.118 }, 00:10:35.118 "memory_domains": [ 00:10:35.118 { 00:10:35.118 "dma_device_id": "system", 00:10:35.118 "dma_device_type": 1 00:10:35.118 }, 00:10:35.118 { 00:10:35.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.118 "dma_device_type": 2 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "dma_device_id": "system", 00:10:35.119 "dma_device_type": 1 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.119 "dma_device_type": 2 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "dma_device_id": "system", 00:10:35.119 "dma_device_type": 1 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.119 "dma_device_type": 2 00:10:35.119 } 00:10:35.119 ], 00:10:35.119 "driver_specific": { 00:10:35.119 "raid": { 00:10:35.119 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:35.119 "strip_size_kb": 64, 00:10:35.119 "state": "online", 00:10:35.119 "raid_level": "raid0", 00:10:35.119 "superblock": true, 00:10:35.119 "num_base_bdevs": 3, 00:10:35.119 "num_base_bdevs_discovered": 3, 00:10:35.119 "num_base_bdevs_operational": 3, 00:10:35.119 "base_bdevs_list": [ 00:10:35.119 { 00:10:35.119 "name": "BaseBdev1", 00:10:35.119 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:35.119 "is_configured": true, 00:10:35.119 "data_offset": 2048, 00:10:35.119 "data_size": 63488 00:10:35.119 }, 00:10:35.119 { 00:10:35.119 "name": "BaseBdev2", 00:10:35.120 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:35.120 "is_configured": true, 00:10:35.120 "data_offset": 2048, 00:10:35.120 "data_size": 63488 00:10:35.120 }, 00:10:35.120 { 00:10:35.120 "name": "BaseBdev3", 00:10:35.120 "uuid": "f92140bf-cd44-4bfc-b69b-2f34cf125034", 00:10:35.120 "is_configured": true, 00:10:35.120 "data_offset": 2048, 00:10:35.120 "data_size": 63488 00:10:35.120 } 00:10:35.120 ] 00:10:35.120 } 00:10:35.120 } 00:10:35.120 }' 00:10:35.120 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.120 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:35.120 BaseBdev2 00:10:35.120 BaseBdev3' 00:10:35.120 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:35.120 11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:35.120 
11:21:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:35.382 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:35.382 "name": "BaseBdev1", 00:10:35.382 "aliases": [ 00:10:35.382 "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6" 00:10:35.382 ], 00:10:35.382 "product_name": "Malloc disk", 00:10:35.382 "block_size": 512, 00:10:35.382 "num_blocks": 65536, 00:10:35.382 "uuid": "2b84a9a1-9e74-4ac2-86c3-aa1d3f319eb6", 00:10:35.382 "assigned_rate_limits": { 00:10:35.382 "rw_ios_per_sec": 0, 00:10:35.382 "rw_mbytes_per_sec": 0, 00:10:35.382 "r_mbytes_per_sec": 0, 00:10:35.382 "w_mbytes_per_sec": 0 00:10:35.382 }, 00:10:35.382 "claimed": true, 00:10:35.382 "claim_type": "exclusive_write", 00:10:35.382 "zoned": false, 00:10:35.382 "supported_io_types": { 00:10:35.382 "read": true, 00:10:35.382 "write": true, 00:10:35.382 "unmap": true, 00:10:35.382 "flush": true, 00:10:35.382 "reset": true, 00:10:35.382 "nvme_admin": false, 00:10:35.382 "nvme_io": false, 00:10:35.382 "nvme_io_md": false, 00:10:35.382 "write_zeroes": true, 00:10:35.382 "zcopy": true, 00:10:35.382 "get_zone_info": false, 00:10:35.382 "zone_management": false, 00:10:35.382 "zone_append": false, 00:10:35.382 "compare": false, 00:10:35.382 "compare_and_write": false, 00:10:35.382 "abort": true, 00:10:35.382 "seek_hole": false, 00:10:35.382 "seek_data": false, 00:10:35.382 "copy": true, 00:10:35.382 "nvme_iov_md": false 00:10:35.382 }, 00:10:35.382 "memory_domains": [ 00:10:35.382 { 00:10:35.382 "dma_device_id": "system", 00:10:35.382 "dma_device_type": 1 00:10:35.382 }, 00:10:35.382 { 00:10:35.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.382 "dma_device_type": 2 00:10:35.382 } 00:10:35.382 ], 00:10:35.382 "driver_specific": {} 00:10:35.382 }' 00:10:35.382 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.693 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:35.951 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:35.951 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:35.951 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:35.951 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:36.209 11:21:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:36.209 "name": "BaseBdev2", 00:10:36.209 "aliases": [ 00:10:36.209 "3b39c494-3cfc-4305-8ecc-b5053e87759a" 00:10:36.209 ], 00:10:36.209 "product_name": "Malloc disk", 00:10:36.209 "block_size": 512, 00:10:36.209 "num_blocks": 65536, 00:10:36.209 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:36.209 "assigned_rate_limits": { 00:10:36.209 "rw_ios_per_sec": 0, 00:10:36.209 "rw_mbytes_per_sec": 0, 00:10:36.209 "r_mbytes_per_sec": 0, 00:10:36.209 "w_mbytes_per_sec": 0 00:10:36.209 }, 00:10:36.209 "claimed": true, 00:10:36.209 "claim_type": "exclusive_write", 00:10:36.209 "zoned": false, 00:10:36.209 "supported_io_types": { 00:10:36.209 "read": true, 00:10:36.209 "write": true, 00:10:36.209 "unmap": true, 00:10:36.209 "flush": true, 00:10:36.209 "reset": true, 00:10:36.209 "nvme_admin": false, 00:10:36.209 "nvme_io": false, 00:10:36.209 "nvme_io_md": false, 00:10:36.209 "write_zeroes": true, 00:10:36.209 "zcopy": true, 00:10:36.209 "get_zone_info": false, 00:10:36.209 "zone_management": false, 00:10:36.209 "zone_append": false, 00:10:36.209 "compare": false, 00:10:36.209 "compare_and_write": false, 00:10:36.209 "abort": true, 00:10:36.209 "seek_hole": false, 00:10:36.209 "seek_data": false, 00:10:36.209 "copy": true, 00:10:36.209 "nvme_iov_md": false 00:10:36.209 }, 00:10:36.209 "memory_domains": [ 00:10:36.209 { 00:10:36.209 "dma_device_id": "system", 00:10:36.209 "dma_device_type": 1 00:10:36.209 }, 00:10:36.209 { 00:10:36.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.209 "dma_device_type": 2 00:10:36.209 } 00:10:36.209 ], 00:10:36.209 "driver_specific": {} 00:10:36.209 }' 00:10:36.209 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.209 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.209 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:36.209 11:21:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.209 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.209 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:36.209 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:36.466 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.030 "name": "BaseBdev3", 00:10:37.030 "aliases": [ 00:10:37.030 
"f92140bf-cd44-4bfc-b69b-2f34cf125034" 00:10:37.030 ], 00:10:37.030 "product_name": "Malloc disk", 00:10:37.030 "block_size": 512, 00:10:37.030 "num_blocks": 65536, 00:10:37.030 "uuid": "f92140bf-cd44-4bfc-b69b-2f34cf125034", 00:10:37.030 "assigned_rate_limits": { 00:10:37.030 "rw_ios_per_sec": 0, 00:10:37.030 "rw_mbytes_per_sec": 0, 00:10:37.030 "r_mbytes_per_sec": 0, 00:10:37.030 "w_mbytes_per_sec": 0 00:10:37.030 }, 00:10:37.030 "claimed": true, 00:10:37.030 "claim_type": "exclusive_write", 00:10:37.030 "zoned": false, 00:10:37.030 "supported_io_types": { 00:10:37.030 "read": true, 00:10:37.030 "write": true, 00:10:37.030 "unmap": true, 00:10:37.030 "flush": true, 00:10:37.030 "reset": true, 00:10:37.030 "nvme_admin": false, 00:10:37.030 "nvme_io": false, 00:10:37.030 "nvme_io_md": false, 00:10:37.030 "write_zeroes": true, 00:10:37.030 "zcopy": true, 00:10:37.030 "get_zone_info": false, 00:10:37.030 "zone_management": false, 00:10:37.030 "zone_append": false, 00:10:37.030 "compare": false, 00:10:37.030 "compare_and_write": false, 00:10:37.030 "abort": true, 00:10:37.030 "seek_hole": false, 00:10:37.030 "seek_data": false, 00:10:37.030 "copy": true, 00:10:37.030 "nvme_iov_md": false 00:10:37.030 }, 00:10:37.030 "memory_domains": [ 00:10:37.030 { 00:10:37.030 "dma_device_id": "system", 00:10:37.030 "dma_device_type": 1 00:10:37.030 }, 00:10:37.030 { 00:10:37.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.030 "dma_device_type": 2 00:10:37.030 } 00:10:37.030 ], 00:10:37.030 "driver_specific": {} 00:10:37.030 }' 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.030 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.288 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.288 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.288 11:21:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.288 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.288 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:37.546 [2024-07-25 11:21:53.314232] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.546 [2024-07-25 11:21:53.314464] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.546 [2024-07-25 11:21:53.314570] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.546 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:37.546 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy raid0 00:10:37.546 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:37.546 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.547 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.804 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.804 "name": "Existed_Raid", 00:10:37.804 "uuid": "46bf246c-f775-4169-8e7a-1eddb140bc81", 00:10:37.804 "strip_size_kb": 64, 00:10:37.804 "state": "offline", 00:10:37.804 "raid_level": "raid0", 00:10:37.804 "superblock": true, 00:10:37.804 "num_base_bdevs": 3, 00:10:37.804 "num_base_bdevs_discovered": 2, 00:10:37.804 "num_base_bdevs_operational": 2, 00:10:37.804 "base_bdevs_list": [ 00:10:37.804 { 00:10:37.804 "name": null, 00:10:37.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.804 "is_configured": false, 00:10:37.804 "data_offset": 2048, 00:10:37.804 "data_size": 63488 00:10:37.804 }, 00:10:37.804 { 00:10:37.804 "name": "BaseBdev2", 00:10:37.804 "uuid": "3b39c494-3cfc-4305-8ecc-b5053e87759a", 00:10:37.804 "is_configured": true, 00:10:37.804 "data_offset": 2048, 00:10:37.804 "data_size": 63488 00:10:37.804 }, 00:10:37.804 { 00:10:37.804 "name": "BaseBdev3", 00:10:37.804 "uuid": "f92140bf-cd44-4bfc-b69b-2f34cf125034", 00:10:37.804 "is_configured": true, 00:10:37.804 "data_offset": 2048, 00:10:37.804 "data_size": 63488 00:10:37.804 } 00:10:37.804 ] 00:10:37.804 }' 00:10:37.804 11:21:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.804 11:21:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.736 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:38.994 [2024-07-25 11:21:54.874164] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.252 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:39.252 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.252 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.252 11:21:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:39.510 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:39.510 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.511 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:39.769 [2024-07-25 11:21:55.484756] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.769 [2024-07-25 11:21:55.484829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.769 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:39.769 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:39.769 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.769 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:40.027 11:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.285 BaseBdev2 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.285 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.543 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.801 [ 00:10:40.801 { 00:10:40.801 "name": "BaseBdev2", 00:10:40.801 "aliases": [ 00:10:40.801 "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9" 00:10:40.801 ], 00:10:40.801 "product_name": "Malloc disk", 00:10:40.801 "block_size": 512, 00:10:40.801 "num_blocks": 65536, 00:10:40.801 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:40.801 "assigned_rate_limits": { 00:10:40.801 "rw_ios_per_sec": 0, 00:10:40.801 "rw_mbytes_per_sec": 0, 00:10:40.801 "r_mbytes_per_sec": 0, 00:10:40.801 "w_mbytes_per_sec": 0 00:10:40.801 }, 00:10:40.801 "claimed": false, 00:10:40.801 "zoned": false, 00:10:40.801 "supported_io_types": { 00:10:40.801 "read": true, 00:10:40.801 "write": true, 00:10:40.801 "unmap": true, 00:10:40.801 "flush": true, 00:10:40.801 "reset": true, 00:10:40.801 "nvme_admin": false, 00:10:40.801 "nvme_io": false, 00:10:40.801 "nvme_io_md": false, 00:10:40.801 "write_zeroes": true, 00:10:40.801 "zcopy": true, 00:10:40.801 "get_zone_info": false, 00:10:40.801 "zone_management": false, 00:10:40.801 "zone_append": false, 00:10:40.801 "compare": false, 00:10:40.801 "compare_and_write": false, 00:10:40.801 "abort": true, 00:10:40.801 "seek_hole": false, 00:10:40.801 "seek_data": false, 00:10:40.801 "copy": true, 00:10:40.801 "nvme_iov_md": false 00:10:40.801 }, 00:10:40.801 "memory_domains": [ 00:10:40.801 { 00:10:40.801 "dma_device_id": "system", 00:10:40.801 "dma_device_type": 1 00:10:40.801 }, 00:10:40.801 { 00:10:40.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.801 "dma_device_type": 2 00:10:40.801 } 00:10:40.801 ], 00:10:40.801 "driver_specific": {} 00:10:40.801 } 00:10:40.801 ] 00:10:41.059 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.059 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:41.059 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:41.059 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.059 BaseBdev3 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.317 11:21:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.317 11:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:41.317 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.576 [ 00:10:41.576 { 00:10:41.576 "name": "BaseBdev3", 00:10:41.576 "aliases": [ 00:10:41.576 "5c91154e-6379-4c40-9f9e-13f9d000ab68" 00:10:41.576 ], 00:10:41.576 "product_name": "Malloc disk", 00:10:41.576 "block_size": 512, 00:10:41.576 "num_blocks": 65536, 00:10:41.576 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:41.576 "assigned_rate_limits": { 00:10:41.576 "rw_ios_per_sec": 0, 00:10:41.576 "rw_mbytes_per_sec": 0, 00:10:41.576 "r_mbytes_per_sec": 0, 00:10:41.576 "w_mbytes_per_sec": 0 00:10:41.576 }, 00:10:41.576 "claimed": false, 00:10:41.576 "zoned": false, 00:10:41.576 "supported_io_types": { 00:10:41.576 "read": true, 00:10:41.576 "write": true, 00:10:41.576 "unmap": true, 00:10:41.576 "flush": true, 00:10:41.576 "reset": true, 00:10:41.576 "nvme_admin": false, 00:10:41.576 "nvme_io": false, 00:10:41.576 "nvme_io_md": false, 00:10:41.576 "write_zeroes": true, 00:10:41.576 "zcopy": true, 00:10:41.576 "get_zone_info": false, 00:10:41.576 "zone_management": false, 00:10:41.576 "zone_append": false, 00:10:41.576 "compare": false, 00:10:41.576 "compare_and_write": false, 00:10:41.576 "abort": true, 00:10:41.576 "seek_hole": false, 00:10:41.576 "seek_data": false, 00:10:41.576 "copy": true, 00:10:41.576 "nvme_iov_md": false 00:10:41.576 }, 00:10:41.576 "memory_domains": [ 00:10:41.576 { 00:10:41.576 "dma_device_id": "system", 00:10:41.576 "dma_device_type": 1 00:10:41.576 }, 00:10:41.576 { 00:10:41.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.576 "dma_device_type": 2 00:10:41.576 } 00:10:41.576 ], 00:10:41.576 "driver_specific": {} 00:10:41.576 } 00:10:41.576 ] 00:10:41.576 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.576 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:41.576 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:41.576 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:41.834 [2024-07-25 11:21:57.629043] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:41.834 [2024-07-25 11:21:57.629130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:41.835 [2024-07-25 11:21:57.629190] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.835 [2024-07-25 11:21:57.631775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.835 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.093 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.093 "name": "Existed_Raid", 00:10:42.093 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:42.093 "strip_size_kb": 64, 00:10:42.093 "state": "configuring", 00:10:42.093 "raid_level": "raid0", 00:10:42.093 "superblock": true, 00:10:42.093 "num_base_bdevs": 3, 00:10:42.093 "num_base_bdevs_discovered": 2, 00:10:42.093 "num_base_bdevs_operational": 3, 00:10:42.093 "base_bdevs_list": [ 00:10:42.093 { 00:10:42.093 "name": "BaseBdev1", 00:10:42.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.093 "is_configured": false, 00:10:42.093 "data_offset": 0, 00:10:42.093 "data_size": 0 00:10:42.093 }, 00:10:42.093 { 00:10:42.093 "name": "BaseBdev2", 00:10:42.093 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:42.093 "is_configured": true, 00:10:42.093 "data_offset": 2048, 00:10:42.093 "data_size": 63488 00:10:42.093 }, 00:10:42.093 { 00:10:42.093 "name": "BaseBdev3", 00:10:42.093 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:42.093 "is_configured": true, 00:10:42.093 "data_offset": 2048, 00:10:42.093 "data_size": 63488 00:10:42.093 } 00:10:42.093 ] 00:10:42.093 }' 00:10:42.093 11:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.093 11:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:43.028 [2024-07-25 11:21:58.801310] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:43.028 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.029 11:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.287 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.287 "name": "Existed_Raid", 00:10:43.287 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:43.287 "strip_size_kb": 64, 00:10:43.287 "state": "configuring", 00:10:43.287 "raid_level": "raid0", 00:10:43.287 "superblock": true, 00:10:43.287 "num_base_bdevs": 3, 00:10:43.287 "num_base_bdevs_discovered": 1, 00:10:43.287 "num_base_bdevs_operational": 3, 00:10:43.287 "base_bdevs_list": [ 00:10:43.287 { 00:10:43.287 "name": "BaseBdev1", 00:10:43.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.287 "is_configured": false, 00:10:43.287 "data_offset": 0, 00:10:43.287 "data_size": 0 00:10:43.287 }, 00:10:43.287 { 00:10:43.287 "name": null, 00:10:43.287 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:43.287 "is_configured": false, 00:10:43.287 "data_offset": 2048, 00:10:43.287 "data_size": 63488 00:10:43.287 }, 00:10:43.287 { 00:10:43.287 "name": "BaseBdev3", 00:10:43.287 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:43.287 "is_configured": true, 00:10:43.287 "data_offset": 2048, 00:10:43.287 "data_size": 63488 00:10:43.287 } 00:10:43.287 ] 00:10:43.287 }' 00:10:43.287 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.287 11:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.222 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.222 11:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:44.222 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:44.222 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.480 [2024-07-25 11:22:00.282128] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.480 BaseBdev1 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.480 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:44.737 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.996 [ 00:10:44.996 { 00:10:44.996 "name": "BaseBdev1", 00:10:44.996 "aliases": [ 00:10:44.996 "b7676f7d-0ab9-49c4-b602-9bec968d6cdb" 00:10:44.996 ], 00:10:44.996 "product_name": "Malloc disk", 00:10:44.996 "block_size": 512, 00:10:44.996 "num_blocks": 65536, 00:10:44.996 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:44.996 "assigned_rate_limits": { 00:10:44.996 "rw_ios_per_sec": 0, 00:10:44.996 "rw_mbytes_per_sec": 0, 00:10:44.996 "r_mbytes_per_sec": 0, 00:10:44.996 "w_mbytes_per_sec": 0 00:10:44.996 }, 00:10:44.996 "claimed": true, 00:10:44.996 "claim_type": "exclusive_write", 00:10:44.996 "zoned": false, 00:10:44.996 "supported_io_types": { 00:10:44.996 "read": true, 00:10:44.996 "write": true, 00:10:44.996 "unmap": true, 00:10:44.996 "flush": true, 00:10:44.996 "reset": true, 00:10:44.996 "nvme_admin": false, 00:10:44.996 "nvme_io": false, 00:10:44.996 "nvme_io_md": false, 00:10:44.996 "write_zeroes": true, 00:10:44.996 "zcopy": true, 00:10:44.996 "get_zone_info": false, 00:10:44.996 "zone_management": false, 00:10:44.996 "zone_append": false, 00:10:44.996 "compare": false, 00:10:44.996 "compare_and_write": false, 00:10:44.996 "abort": true, 00:10:44.996 "seek_hole": false, 00:10:44.996 "seek_data": false, 00:10:44.996 "copy": true, 00:10:44.996 "nvme_iov_md": false 00:10:44.996 }, 00:10:44.996 "memory_domains": [ 00:10:44.996 { 00:10:44.996 "dma_device_id": "system", 00:10:44.996 "dma_device_type": 1 00:10:44.996 }, 00:10:44.996 { 00:10:44.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.996 "dma_device_type": 2 00:10:44.996 } 00:10:44.996 ], 00:10:44.996 "driver_specific": {} 00:10:44.996 } 00:10:44.996 ] 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.996 11:22:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.996 11:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.259 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.259 "name": "Existed_Raid", 00:10:45.259 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:45.259 "strip_size_kb": 64, 00:10:45.259 "state": "configuring", 00:10:45.259 "raid_level": "raid0", 00:10:45.259 "superblock": true, 00:10:45.259 "num_base_bdevs": 3, 00:10:45.259 "num_base_bdevs_discovered": 2, 00:10:45.259 "num_base_bdevs_operational": 3, 00:10:45.259 "base_bdevs_list": [ 00:10:45.260 { 00:10:45.260 "name": "BaseBdev1", 00:10:45.260 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:45.260 "is_configured": true, 00:10:45.260 "data_offset": 2048, 00:10:45.260 "data_size": 63488 00:10:45.260 }, 00:10:45.260 { 00:10:45.260 "name": null, 00:10:45.260 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:45.260 "is_configured": false, 00:10:45.260 "data_offset": 2048, 00:10:45.260 "data_size": 63488 00:10:45.260 }, 00:10:45.260 { 00:10:45.260 "name": "BaseBdev3", 00:10:45.260 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:45.260 "is_configured": true, 00:10:45.260 "data_offset": 2048, 00:10:45.260 "data_size": 63488 00:10:45.260 } 00:10:45.260 ] 00:10:45.260 }' 00:10:45.260 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.260 11:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.209 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.209 11:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.209 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:46.209 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:46.467 [2024-07-25 11:22:02.278966] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.467 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.468 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:10:46.468 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.468 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.468 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.726 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.726 "name": "Existed_Raid", 00:10:46.726 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:46.726 "strip_size_kb": 64, 00:10:46.726 "state": "configuring", 00:10:46.726 "raid_level": "raid0", 00:10:46.726 "superblock": true, 00:10:46.726 "num_base_bdevs": 3, 00:10:46.726 "num_base_bdevs_discovered": 1, 00:10:46.726 "num_base_bdevs_operational": 3, 00:10:46.726 "base_bdevs_list": [ 00:10:46.726 { 00:10:46.726 "name": "BaseBdev1", 00:10:46.726 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:46.726 "is_configured": true, 00:10:46.726 "data_offset": 2048, 00:10:46.726 "data_size": 63488 00:10:46.726 }, 00:10:46.726 { 00:10:46.726 "name": null, 00:10:46.726 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:46.726 "is_configured": false, 00:10:46.726 "data_offset": 2048, 00:10:46.726 "data_size": 63488 00:10:46.726 }, 00:10:46.726 { 00:10:46.726 "name": null, 00:10:46.726 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:46.726 "is_configured": false, 00:10:46.726 "data_offset": 2048, 00:10:46.726 "data_size": 63488 00:10:46.726 } 00:10:46.726 ] 00:10:46.726 }' 00:10:46.726 11:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.726 11:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.662 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.662 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.662 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:47.662 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:47.920 [2024-07-25 11:22:03.799353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.179 11:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.437 11:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.437 "name": "Existed_Raid", 00:10:48.437 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:48.437 "strip_size_kb": 64, 00:10:48.437 "state": "configuring", 00:10:48.437 "raid_level": "raid0", 00:10:48.437 "superblock": true, 00:10:48.437 "num_base_bdevs": 3, 00:10:48.437 "num_base_bdevs_discovered": 2, 00:10:48.437 "num_base_bdevs_operational": 3, 00:10:48.437 "base_bdevs_list": [ 00:10:48.437 { 00:10:48.437 "name": "BaseBdev1", 00:10:48.437 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:48.437 "is_configured": true, 00:10:48.437 "data_offset": 2048, 00:10:48.437 "data_size": 63488 00:10:48.437 }, 00:10:48.437 { 00:10:48.437 "name": null, 00:10:48.437 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:48.437 "is_configured": false, 00:10:48.437 "data_offset": 2048, 00:10:48.437 "data_size": 63488 00:10:48.437 }, 00:10:48.437 { 00:10:48.437 "name": "BaseBdev3", 00:10:48.437 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:48.437 "is_configured": true, 00:10:48.437 "data_offset": 2048, 00:10:48.437 "data_size": 63488 00:10:48.437 } 00:10:48.437 ] 00:10:48.437 }' 00:10:48.437 11:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.437 11:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.003 11:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.003 11:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.261 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:49.261 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:49.519 [2024-07-25 11:22:05.223812] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.519 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.777 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.777 "name": "Existed_Raid", 00:10:49.777 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:49.777 "strip_size_kb": 64, 00:10:49.777 "state": "configuring", 00:10:49.777 "raid_level": "raid0", 00:10:49.777 "superblock": true, 00:10:49.777 "num_base_bdevs": 3, 00:10:49.777 "num_base_bdevs_discovered": 1, 00:10:49.777 "num_base_bdevs_operational": 3, 00:10:49.777 "base_bdevs_list": [ 00:10:49.777 { 00:10:49.777 "name": null, 00:10:49.777 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:49.777 "is_configured": false, 00:10:49.777 "data_offset": 2048, 00:10:49.777 "data_size": 63488 00:10:49.777 }, 00:10:49.777 { 00:10:49.777 "name": null, 00:10:49.777 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:49.777 "is_configured": false, 00:10:49.777 "data_offset": 2048, 00:10:49.777 "data_size": 63488 00:10:49.777 }, 00:10:49.777 { 00:10:49.777 "name": "BaseBdev3", 00:10:49.777 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:49.777 "is_configured": true, 00:10:49.777 "data_offset": 2048, 00:10:49.777 "data_size": 63488 00:10:49.777 } 00:10:49.777 ] 00:10:49.777 }' 00:10:49.777 11:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.777 11:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.343 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.343 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.601 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:50.601 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:50.859 [2024-07-25 11:22:06.717437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.859 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.117 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.117 11:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.375 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.375 "name": "Existed_Raid", 00:10:51.375 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:51.375 "strip_size_kb": 64, 00:10:51.375 "state": "configuring", 00:10:51.375 "raid_level": "raid0", 00:10:51.375 "superblock": true, 00:10:51.375 "num_base_bdevs": 3, 00:10:51.375 "num_base_bdevs_discovered": 2, 00:10:51.375 "num_base_bdevs_operational": 3, 00:10:51.375 "base_bdevs_list": [ 00:10:51.375 { 00:10:51.375 "name": null, 00:10:51.375 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:51.375 "is_configured": false, 00:10:51.375 "data_offset": 2048, 00:10:51.375 "data_size": 63488 00:10:51.375 }, 00:10:51.375 { 00:10:51.375 "name": "BaseBdev2", 00:10:51.375 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:51.375 "is_configured": true, 00:10:51.375 "data_offset": 2048, 00:10:51.375 "data_size": 63488 00:10:51.375 }, 00:10:51.375 { 00:10:51.375 "name": "BaseBdev3", 00:10:51.375 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:51.375 "is_configured": true, 00:10:51.375 "data_offset": 2048, 00:10:51.375 "data_size": 63488 00:10:51.375 } 00:10:51.375 ] 00:10:51.375 }' 00:10:51.375 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.375 11:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.942 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.942 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.200 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:52.200 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.200 11:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.458 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b7676f7d-0ab9-49c4-b602-9bec968d6cdb 00:10:52.716 [2024-07-25 11:22:08.460914] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:52.716 [2024-07-25 11:22:08.461226] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.716 [2024-07-25 11:22:08.461245] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:52.716 [2024-07-25 11:22:08.461582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:52.716 [2024-07-25 11:22:08.461809] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.716 [2024-07-25 11:22:08.461832] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:52.716 NewBaseBdev 00:10:52.716 [2024-07-25 11:22:08.462002] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.716 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:52.974 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.232 [ 00:10:53.232 { 00:10:53.232 "name": "NewBaseBdev", 00:10:53.232 "aliases": [ 00:10:53.232 "b7676f7d-0ab9-49c4-b602-9bec968d6cdb" 00:10:53.232 ], 00:10:53.232 "product_name": "Malloc disk", 00:10:53.232 "block_size": 512, 00:10:53.232 "num_blocks": 65536, 00:10:53.232 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:53.232 "assigned_rate_limits": { 00:10:53.232 "rw_ios_per_sec": 0, 00:10:53.232 "rw_mbytes_per_sec": 0, 00:10:53.232 "r_mbytes_per_sec": 0, 00:10:53.232 "w_mbytes_per_sec": 0 00:10:53.232 }, 00:10:53.232 "claimed": true, 00:10:53.232 "claim_type": "exclusive_write", 00:10:53.232 "zoned": false, 00:10:53.232 "supported_io_types": { 00:10:53.232 "read": true, 00:10:53.232 "write": true, 00:10:53.232 "unmap": true, 00:10:53.232 "flush": true, 00:10:53.232 "reset": true, 00:10:53.232 "nvme_admin": false, 00:10:53.232 "nvme_io": false, 00:10:53.232 "nvme_io_md": false, 00:10:53.232 "write_zeroes": true, 00:10:53.232 "zcopy": true, 00:10:53.232 "get_zone_info": false, 00:10:53.232 "zone_management": false, 00:10:53.232 "zone_append": false, 00:10:53.232 "compare": false, 00:10:53.232 "compare_and_write": false, 00:10:53.232 "abort": true, 00:10:53.232 "seek_hole": false, 00:10:53.232 "seek_data": false, 00:10:53.232 "copy": true, 00:10:53.232 "nvme_iov_md": false 00:10:53.232 }, 00:10:53.232 "memory_domains": [ 00:10:53.232 { 00:10:53.232 "dma_device_id": "system", 00:10:53.233 "dma_device_type": 1 00:10:53.233 }, 00:10:53.233 { 00:10:53.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.233 "dma_device_type": 2 00:10:53.233 } 00:10:53.233 ], 00:10:53.233 "driver_specific": {} 00:10:53.233 } 00:10:53.233 ] 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.233 11:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.491 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.491 "name": "Existed_Raid", 00:10:53.491 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:53.491 "strip_size_kb": 64, 00:10:53.491 "state": "online", 00:10:53.491 "raid_level": "raid0", 00:10:53.491 "superblock": true, 00:10:53.491 "num_base_bdevs": 3, 00:10:53.491 "num_base_bdevs_discovered": 3, 00:10:53.491 "num_base_bdevs_operational": 3, 00:10:53.491 "base_bdevs_list": [ 00:10:53.491 { 00:10:53.491 "name": "NewBaseBdev", 00:10:53.491 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:53.491 "is_configured": true, 00:10:53.491 "data_offset": 2048, 00:10:53.491 "data_size": 63488 00:10:53.491 }, 00:10:53.491 { 00:10:53.491 "name": "BaseBdev2", 00:10:53.491 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:53.491 "is_configured": true, 00:10:53.491 "data_offset": 2048, 00:10:53.491 "data_size": 63488 00:10:53.491 }, 00:10:53.491 { 00:10:53.491 "name": "BaseBdev3", 00:10:53.491 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:53.491 "is_configured": true, 00:10:53.491 "data_offset": 2048, 00:10:53.491 "data_size": 63488 00:10:53.491 } 00:10:53.491 ] 00:10:53.491 }' 00:10:53.491 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.491 11:22:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.056 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.056 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:54.056 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:54.056 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:54.056 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:54.057 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 
00:10:54.057 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:54.057 11:22:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:54.314 [2024-07-25 11:22:10.073913] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:54.314 "name": "Existed_Raid", 00:10:54.314 "aliases": [ 00:10:54.314 "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac" 00:10:54.314 ], 00:10:54.314 "product_name": "Raid Volume", 00:10:54.314 "block_size": 512, 00:10:54.314 "num_blocks": 190464, 00:10:54.314 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:54.314 "assigned_rate_limits": { 00:10:54.314 "rw_ios_per_sec": 0, 00:10:54.314 "rw_mbytes_per_sec": 0, 00:10:54.314 "r_mbytes_per_sec": 0, 00:10:54.314 "w_mbytes_per_sec": 0 00:10:54.314 }, 00:10:54.314 "claimed": false, 00:10:54.314 "zoned": false, 00:10:54.314 "supported_io_types": { 00:10:54.314 "read": true, 00:10:54.314 "write": true, 00:10:54.314 "unmap": true, 00:10:54.314 "flush": true, 00:10:54.314 "reset": true, 00:10:54.314 "nvme_admin": false, 00:10:54.314 "nvme_io": false, 00:10:54.314 "nvme_io_md": false, 00:10:54.314 "write_zeroes": true, 00:10:54.314 "zcopy": false, 00:10:54.314 "get_zone_info": false, 00:10:54.314 "zone_management": false, 00:10:54.314 "zone_append": false, 00:10:54.314 "compare": false, 00:10:54.314 "compare_and_write": false, 00:10:54.314 "abort": false, 00:10:54.314 "seek_hole": false, 00:10:54.314 "seek_data": false, 00:10:54.314 "copy": false, 00:10:54.314 "nvme_iov_md": false 00:10:54.314 }, 00:10:54.314 "memory_domains": [ 00:10:54.314 { 00:10:54.314 "dma_device_id": "system", 00:10:54.314 "dma_device_type": 1 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.314 "dma_device_type": 2 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "dma_device_id": "system", 00:10:54.314 "dma_device_type": 1 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.314 "dma_device_type": 2 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "dma_device_id": "system", 00:10:54.314 "dma_device_type": 1 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.314 "dma_device_type": 2 00:10:54.314 } 00:10:54.314 ], 00:10:54.314 "driver_specific": { 00:10:54.314 "raid": { 00:10:54.314 "uuid": "a02d8119-ae00-4ba9-89d2-03f7ab0b53ac", 00:10:54.314 "strip_size_kb": 64, 00:10:54.314 "state": "online", 00:10:54.314 "raid_level": "raid0", 00:10:54.314 "superblock": true, 00:10:54.314 "num_base_bdevs": 3, 00:10:54.314 "num_base_bdevs_discovered": 3, 00:10:54.314 "num_base_bdevs_operational": 3, 00:10:54.314 "base_bdevs_list": [ 00:10:54.314 { 00:10:54.314 "name": "NewBaseBdev", 00:10:54.314 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:54.314 "is_configured": true, 00:10:54.314 "data_offset": 2048, 00:10:54.314 "data_size": 63488 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "name": "BaseBdev2", 00:10:54.314 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:54.314 "is_configured": true, 00:10:54.314 "data_offset": 2048, 00:10:54.314 "data_size": 63488 00:10:54.314 }, 00:10:54.314 { 00:10:54.314 "name": "BaseBdev3", 00:10:54.314 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:54.314 "is_configured": true, 00:10:54.314 "data_offset": 2048, 
00:10:54.314 "data_size": 63488 00:10:54.314 } 00:10:54.314 ] 00:10:54.314 } 00:10:54.314 } 00:10:54.314 }' 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:54.314 BaseBdev2 00:10:54.314 BaseBdev3' 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:54.314 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.572 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.572 "name": "NewBaseBdev", 00:10:54.572 "aliases": [ 00:10:54.572 "b7676f7d-0ab9-49c4-b602-9bec968d6cdb" 00:10:54.572 ], 00:10:54.572 "product_name": "Malloc disk", 00:10:54.572 "block_size": 512, 00:10:54.572 "num_blocks": 65536, 00:10:54.572 "uuid": "b7676f7d-0ab9-49c4-b602-9bec968d6cdb", 00:10:54.572 "assigned_rate_limits": { 00:10:54.572 "rw_ios_per_sec": 0, 00:10:54.572 "rw_mbytes_per_sec": 0, 00:10:54.572 "r_mbytes_per_sec": 0, 00:10:54.572 "w_mbytes_per_sec": 0 00:10:54.572 }, 00:10:54.572 "claimed": true, 00:10:54.572 "claim_type": "exclusive_write", 00:10:54.572 "zoned": false, 00:10:54.572 "supported_io_types": { 00:10:54.572 "read": true, 00:10:54.572 "write": true, 00:10:54.572 "unmap": true, 00:10:54.572 "flush": true, 00:10:54.572 "reset": true, 00:10:54.572 "nvme_admin": false, 00:10:54.572 "nvme_io": false, 00:10:54.572 "nvme_io_md": false, 00:10:54.572 "write_zeroes": true, 00:10:54.572 "zcopy": true, 00:10:54.572 "get_zone_info": false, 00:10:54.572 "zone_management": false, 00:10:54.572 "zone_append": false, 00:10:54.572 "compare": false, 00:10:54.572 "compare_and_write": false, 00:10:54.572 "abort": true, 00:10:54.572 "seek_hole": false, 00:10:54.572 "seek_data": false, 00:10:54.572 "copy": true, 00:10:54.572 "nvme_iov_md": false 00:10:54.572 }, 00:10:54.572 "memory_domains": [ 00:10:54.572 { 00:10:54.572 "dma_device_id": "system", 00:10:54.572 "dma_device_type": 1 00:10:54.572 }, 00:10:54.572 { 00:10:54.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.572 "dma_device_type": 2 00:10:54.572 } 00:10:54.572 ], 00:10:54.572 "driver_specific": {} 00:10:54.572 }' 00:10:54.572 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.831 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:55.090 
11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:55.090 11:22:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:55.348 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:55.348 "name": "BaseBdev2", 00:10:55.348 "aliases": [ 00:10:55.348 "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9" 00:10:55.348 ], 00:10:55.348 "product_name": "Malloc disk", 00:10:55.348 "block_size": 512, 00:10:55.348 "num_blocks": 65536, 00:10:55.348 "uuid": "1a5a08c7-c1d8-4c55-b6f2-fad4a5e8ddf9", 00:10:55.348 "assigned_rate_limits": { 00:10:55.348 "rw_ios_per_sec": 0, 00:10:55.348 "rw_mbytes_per_sec": 0, 00:10:55.349 "r_mbytes_per_sec": 0, 00:10:55.349 "w_mbytes_per_sec": 0 00:10:55.349 }, 00:10:55.349 "claimed": true, 00:10:55.349 "claim_type": "exclusive_write", 00:10:55.349 "zoned": false, 00:10:55.349 "supported_io_types": { 00:10:55.349 "read": true, 00:10:55.349 "write": true, 00:10:55.349 "unmap": true, 00:10:55.349 "flush": true, 00:10:55.349 "reset": true, 00:10:55.349 "nvme_admin": false, 00:10:55.349 "nvme_io": false, 00:10:55.349 "nvme_io_md": false, 00:10:55.349 "write_zeroes": true, 00:10:55.349 "zcopy": true, 00:10:55.349 "get_zone_info": false, 00:10:55.349 "zone_management": false, 00:10:55.349 "zone_append": false, 00:10:55.349 "compare": false, 00:10:55.349 "compare_and_write": false, 00:10:55.349 "abort": true, 00:10:55.349 "seek_hole": false, 00:10:55.349 "seek_data": false, 00:10:55.349 "copy": true, 00:10:55.349 "nvme_iov_md": false 00:10:55.349 }, 00:10:55.349 "memory_domains": [ 00:10:55.349 { 00:10:55.349 "dma_device_id": "system", 00:10:55.349 "dma_device_type": 1 00:10:55.349 }, 00:10:55.349 { 00:10:55.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.349 "dma_device_type": 2 00:10:55.349 } 00:10:55.349 ], 00:10:55.349 "driver_specific": {} 00:10:55.349 }' 00:10:55.349 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.349 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.349 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:55.349 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.349 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:55.607 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:55.865 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:55.865 "name": "BaseBdev3", 00:10:55.865 "aliases": [ 00:10:55.865 "5c91154e-6379-4c40-9f9e-13f9d000ab68" 00:10:55.865 ], 00:10:55.865 "product_name": "Malloc disk", 00:10:55.865 "block_size": 512, 00:10:55.865 "num_blocks": 65536, 00:10:55.865 "uuid": "5c91154e-6379-4c40-9f9e-13f9d000ab68", 00:10:55.865 "assigned_rate_limits": { 00:10:55.865 "rw_ios_per_sec": 0, 00:10:55.865 "rw_mbytes_per_sec": 0, 00:10:55.865 "r_mbytes_per_sec": 0, 00:10:55.865 "w_mbytes_per_sec": 0 00:10:55.865 }, 00:10:55.865 "claimed": true, 00:10:55.865 "claim_type": "exclusive_write", 00:10:55.865 "zoned": false, 00:10:55.865 "supported_io_types": { 00:10:55.865 "read": true, 00:10:55.865 "write": true, 00:10:55.865 "unmap": true, 00:10:55.865 "flush": true, 00:10:55.865 "reset": true, 00:10:55.865 "nvme_admin": false, 00:10:55.865 "nvme_io": false, 00:10:55.865 "nvme_io_md": false, 00:10:55.865 "write_zeroes": true, 00:10:55.865 "zcopy": true, 00:10:55.865 "get_zone_info": false, 00:10:55.865 "zone_management": false, 00:10:55.865 "zone_append": false, 00:10:55.865 "compare": false, 00:10:55.865 "compare_and_write": false, 00:10:55.865 "abort": true, 00:10:55.865 "seek_hole": false, 00:10:55.865 "seek_data": false, 00:10:55.865 "copy": true, 00:10:55.865 "nvme_iov_md": false 00:10:55.865 }, 00:10:55.865 "memory_domains": [ 00:10:55.865 { 00:10:55.865 "dma_device_id": "system", 00:10:55.865 "dma_device_type": 1 00:10:55.865 }, 00:10:55.865 { 00:10:55.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.865 "dma_device_type": 2 00:10:55.865 } 00:10:55.865 ], 00:10:55.865 "driver_specific": {} 00:10:55.865 }' 00:10:55.865 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.123 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.123 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:56.123 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.123 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.123 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.124 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.124 11:22:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.382 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.382 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.382 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.382 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:10:56.382 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:56.640 [2024-07-25 11:22:12.410178] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.640 [2024-07-25 11:22:12.410254] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.640 [2024-07-25 11:22:12.410374] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.640 [2024-07-25 11:22:12.410468] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.640 [2024-07-25 11:22:12.410498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 68222 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68222 ']' 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68222 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68222 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.640 killing process with pid 68222 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68222' 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68222 00:10:56.640 [2024-07-25 11:22:12.452487] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.640 11:22:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68222 00:10:56.898 [2024-07-25 11:22:12.736391] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.272 ************************************ 00:10:58.272 END TEST raid_state_function_test_sb 00:10:58.272 ************************************ 00:10:58.272 11:22:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:58.272 00:10:58.272 real 0m32.550s 00:10:58.272 user 0m59.620s 00:10:58.272 sys 0m4.056s 00:10:58.272 11:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.272 11:22:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.272 11:22:14 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:58.272 11:22:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:58.272 11:22:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.272 11:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.272 ************************************ 00:10:58.272 START TEST raid_superblock_test 00:10:58.272 ************************************ 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=69202 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 69202 /var/tmp/spdk-raid.sock 00:10:58.272 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69202 ']' 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.273 11:22:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:58.531 [2024-07-25 11:22:14.168106] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:10:58.531 [2024-07-25 11:22:14.168276] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69202 ] 00:10:58.531 [2024-07-25 11:22:14.345413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.789 [2024-07-25 11:22:14.618522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.048 [2024-07-25 11:22:14.818695] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.048 [2024-07-25 11:22:14.818778] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.307 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:59.566 malloc1 00:10:59.566 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.133 [2024-07-25 11:22:15.707155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.133 [2024-07-25 11:22:15.707319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.133 [2024-07-25 11:22:15.707357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:00.133 [2024-07-25 11:22:15.707378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.133 [2024-07-25 11:22:15.710485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.133 [2024-07-25 11:22:15.710535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.133 pt1 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:00.133 malloc2 00:11:00.133 11:22:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.417 [2024-07-25 11:22:16.203701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.417 [2024-07-25 11:22:16.203833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.417 [2024-07-25 11:22:16.203869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:00.417 [2024-07-25 11:22:16.203893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.417 [2024-07-25 11:22:16.206913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.417 [2024-07-25 11:22:16.206993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.417 pt2 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:00.417 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:00.418 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:00.418 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:00.676 malloc3 00:11:00.676 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.934 [2024-07-25 11:22:16.696657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.934 [2024-07-25 11:22:16.696781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.934 [2024-07-25 11:22:16.696817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:00.934 [2024-07-25 11:22:16.696837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.934 [2024-07-25 11:22:16.699848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.934 [2024-07-25 
11:22:16.699897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.934 pt3 00:11:00.934 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:00.934 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:00.934 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:01.192 [2024-07-25 11:22:16.920843] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:01.192 [2024-07-25 11:22:16.923480] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.192 [2024-07-25 11:22:16.923574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.192 [2024-07-25 11:22:16.923850] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:01.192 [2024-07-25 11:22:16.923881] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:01.192 [2024-07-25 11:22:16.924352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:01.192 [2024-07-25 11:22:16.924654] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:01.192 [2024-07-25 11:22:16.924708] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:01.192 [2024-07-25 11:22:16.924992] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.192 11:22:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.468 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.468 "name": "raid_bdev1", 00:11:01.468 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:01.468 "strip_size_kb": 64, 00:11:01.468 "state": "online", 00:11:01.468 "raid_level": "raid0", 00:11:01.468 "superblock": true, 00:11:01.468 "num_base_bdevs": 3, 00:11:01.468 "num_base_bdevs_discovered": 3, 00:11:01.468 "num_base_bdevs_operational": 3, 00:11:01.468 
"base_bdevs_list": [ 00:11:01.468 { 00:11:01.468 "name": "pt1", 00:11:01.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.468 "is_configured": true, 00:11:01.468 "data_offset": 2048, 00:11:01.468 "data_size": 63488 00:11:01.468 }, 00:11:01.468 { 00:11:01.468 "name": "pt2", 00:11:01.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.468 "is_configured": true, 00:11:01.468 "data_offset": 2048, 00:11:01.468 "data_size": 63488 00:11:01.468 }, 00:11:01.468 { 00:11:01.468 "name": "pt3", 00:11:01.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.468 "is_configured": true, 00:11:01.468 "data_offset": 2048, 00:11:01.468 "data_size": 63488 00:11:01.468 } 00:11:01.468 ] 00:11:01.468 }' 00:11:01.468 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.468 11:22:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:02.036 11:22:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:02.294 [2024-07-25 11:22:18.097798] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.294 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:02.294 "name": "raid_bdev1", 00:11:02.294 "aliases": [ 00:11:02.294 "301bed4e-f006-4b29-aab2-94d9abbb9c0e" 00:11:02.294 ], 00:11:02.294 "product_name": "Raid Volume", 00:11:02.294 "block_size": 512, 00:11:02.294 "num_blocks": 190464, 00:11:02.294 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:02.294 "assigned_rate_limits": { 00:11:02.294 "rw_ios_per_sec": 0, 00:11:02.294 "rw_mbytes_per_sec": 0, 00:11:02.294 "r_mbytes_per_sec": 0, 00:11:02.294 "w_mbytes_per_sec": 0 00:11:02.294 }, 00:11:02.294 "claimed": false, 00:11:02.295 "zoned": false, 00:11:02.295 "supported_io_types": { 00:11:02.295 "read": true, 00:11:02.295 "write": true, 00:11:02.295 "unmap": true, 00:11:02.295 "flush": true, 00:11:02.295 "reset": true, 00:11:02.295 "nvme_admin": false, 00:11:02.295 "nvme_io": false, 00:11:02.295 "nvme_io_md": false, 00:11:02.295 "write_zeroes": true, 00:11:02.295 "zcopy": false, 00:11:02.295 "get_zone_info": false, 00:11:02.295 "zone_management": false, 00:11:02.295 "zone_append": false, 00:11:02.295 "compare": false, 00:11:02.295 "compare_and_write": false, 00:11:02.295 "abort": false, 00:11:02.295 "seek_hole": false, 00:11:02.295 "seek_data": false, 00:11:02.295 "copy": false, 00:11:02.295 "nvme_iov_md": false 00:11:02.295 }, 00:11:02.295 "memory_domains": [ 00:11:02.295 { 00:11:02.295 "dma_device_id": "system", 00:11:02.295 "dma_device_type": 1 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.295 "dma_device_type": 2 
00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "dma_device_id": "system", 00:11:02.295 "dma_device_type": 1 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.295 "dma_device_type": 2 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "dma_device_id": "system", 00:11:02.295 "dma_device_type": 1 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.295 "dma_device_type": 2 00:11:02.295 } 00:11:02.295 ], 00:11:02.295 "driver_specific": { 00:11:02.295 "raid": { 00:11:02.295 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:02.295 "strip_size_kb": 64, 00:11:02.295 "state": "online", 00:11:02.295 "raid_level": "raid0", 00:11:02.295 "superblock": true, 00:11:02.295 "num_base_bdevs": 3, 00:11:02.295 "num_base_bdevs_discovered": 3, 00:11:02.295 "num_base_bdevs_operational": 3, 00:11:02.295 "base_bdevs_list": [ 00:11:02.295 { 00:11:02.295 "name": "pt1", 00:11:02.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.295 "is_configured": true, 00:11:02.295 "data_offset": 2048, 00:11:02.295 "data_size": 63488 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "name": "pt2", 00:11:02.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.295 "is_configured": true, 00:11:02.295 "data_offset": 2048, 00:11:02.295 "data_size": 63488 00:11:02.295 }, 00:11:02.295 { 00:11:02.295 "name": "pt3", 00:11:02.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.295 "is_configured": true, 00:11:02.295 "data_offset": 2048, 00:11:02.295 "data_size": 63488 00:11:02.295 } 00:11:02.295 ] 00:11:02.295 } 00:11:02.295 } 00:11:02.295 }' 00:11:02.295 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.295 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:02.295 pt2 00:11:02.295 pt3' 00:11:02.295 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:02.553 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:02.553 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:02.812 "name": "pt1", 00:11:02.812 "aliases": [ 00:11:02.812 "00000000-0000-0000-0000-000000000001" 00:11:02.812 ], 00:11:02.812 "product_name": "passthru", 00:11:02.812 "block_size": 512, 00:11:02.812 "num_blocks": 65536, 00:11:02.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.812 "assigned_rate_limits": { 00:11:02.812 "rw_ios_per_sec": 0, 00:11:02.812 "rw_mbytes_per_sec": 0, 00:11:02.812 "r_mbytes_per_sec": 0, 00:11:02.812 "w_mbytes_per_sec": 0 00:11:02.812 }, 00:11:02.812 "claimed": true, 00:11:02.812 "claim_type": "exclusive_write", 00:11:02.812 "zoned": false, 00:11:02.812 "supported_io_types": { 00:11:02.812 "read": true, 00:11:02.812 "write": true, 00:11:02.812 "unmap": true, 00:11:02.812 "flush": true, 00:11:02.812 "reset": true, 00:11:02.812 "nvme_admin": false, 00:11:02.812 "nvme_io": false, 00:11:02.812 "nvme_io_md": false, 00:11:02.812 "write_zeroes": true, 00:11:02.812 "zcopy": true, 00:11:02.812 "get_zone_info": false, 00:11:02.812 "zone_management": false, 00:11:02.812 "zone_append": false, 00:11:02.812 "compare": false, 00:11:02.812 "compare_and_write": false, 00:11:02.812 "abort": true, 
00:11:02.812 "seek_hole": false, 00:11:02.812 "seek_data": false, 00:11:02.812 "copy": true, 00:11:02.812 "nvme_iov_md": false 00:11:02.812 }, 00:11:02.812 "memory_domains": [ 00:11:02.812 { 00:11:02.812 "dma_device_id": "system", 00:11:02.812 "dma_device_type": 1 00:11:02.812 }, 00:11:02.812 { 00:11:02.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.812 "dma_device_type": 2 00:11:02.812 } 00:11:02.812 ], 00:11:02.812 "driver_specific": { 00:11:02.812 "passthru": { 00:11:02.812 "name": "pt1", 00:11:02.812 "base_bdev_name": "malloc1" 00:11:02.812 } 00:11:02.812 } 00:11:02.812 }' 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:02.812 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:03.071 11:22:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:03.329 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:03.329 "name": "pt2", 00:11:03.329 "aliases": [ 00:11:03.329 "00000000-0000-0000-0000-000000000002" 00:11:03.329 ], 00:11:03.329 "product_name": "passthru", 00:11:03.329 "block_size": 512, 00:11:03.329 "num_blocks": 65536, 00:11:03.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.329 "assigned_rate_limits": { 00:11:03.329 "rw_ios_per_sec": 0, 00:11:03.329 "rw_mbytes_per_sec": 0, 00:11:03.329 "r_mbytes_per_sec": 0, 00:11:03.329 "w_mbytes_per_sec": 0 00:11:03.329 }, 00:11:03.329 "claimed": true, 00:11:03.329 "claim_type": "exclusive_write", 00:11:03.329 "zoned": false, 00:11:03.329 "supported_io_types": { 00:11:03.329 "read": true, 00:11:03.329 "write": true, 00:11:03.329 "unmap": true, 00:11:03.329 "flush": true, 00:11:03.329 "reset": true, 00:11:03.329 "nvme_admin": false, 00:11:03.329 "nvme_io": false, 00:11:03.329 "nvme_io_md": false, 00:11:03.329 "write_zeroes": true, 00:11:03.329 "zcopy": true, 00:11:03.329 "get_zone_info": false, 00:11:03.329 "zone_management": false, 00:11:03.329 "zone_append": false, 00:11:03.329 "compare": false, 00:11:03.329 "compare_and_write": false, 00:11:03.329 "abort": true, 00:11:03.329 "seek_hole": false, 00:11:03.329 "seek_data": false, 00:11:03.329 "copy": true, 00:11:03.329 "nvme_iov_md": false 00:11:03.329 }, 
00:11:03.329 "memory_domains": [ 00:11:03.329 { 00:11:03.329 "dma_device_id": "system", 00:11:03.329 "dma_device_type": 1 00:11:03.329 }, 00:11:03.329 { 00:11:03.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.329 "dma_device_type": 2 00:11:03.329 } 00:11:03.329 ], 00:11:03.329 "driver_specific": { 00:11:03.329 "passthru": { 00:11:03.329 "name": "pt2", 00:11:03.329 "base_bdev_name": "malloc2" 00:11:03.329 } 00:11:03.329 } 00:11:03.329 }' 00:11:03.329 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.329 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:03.587 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.845 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:03.845 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:03.845 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:03.845 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:03.845 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:04.104 "name": "pt3", 00:11:04.104 "aliases": [ 00:11:04.104 "00000000-0000-0000-0000-000000000003" 00:11:04.104 ], 00:11:04.104 "product_name": "passthru", 00:11:04.104 "block_size": 512, 00:11:04.104 "num_blocks": 65536, 00:11:04.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.104 "assigned_rate_limits": { 00:11:04.104 "rw_ios_per_sec": 0, 00:11:04.104 "rw_mbytes_per_sec": 0, 00:11:04.104 "r_mbytes_per_sec": 0, 00:11:04.104 "w_mbytes_per_sec": 0 00:11:04.104 }, 00:11:04.104 "claimed": true, 00:11:04.104 "claim_type": "exclusive_write", 00:11:04.104 "zoned": false, 00:11:04.104 "supported_io_types": { 00:11:04.104 "read": true, 00:11:04.104 "write": true, 00:11:04.104 "unmap": true, 00:11:04.104 "flush": true, 00:11:04.104 "reset": true, 00:11:04.104 "nvme_admin": false, 00:11:04.104 "nvme_io": false, 00:11:04.104 "nvme_io_md": false, 00:11:04.104 "write_zeroes": true, 00:11:04.104 "zcopy": true, 00:11:04.104 "get_zone_info": false, 00:11:04.104 "zone_management": false, 00:11:04.104 "zone_append": false, 00:11:04.104 "compare": false, 00:11:04.104 "compare_and_write": false, 00:11:04.104 "abort": true, 00:11:04.104 "seek_hole": false, 00:11:04.104 "seek_data": false, 00:11:04.104 "copy": true, 00:11:04.104 "nvme_iov_md": false 00:11:04.104 }, 00:11:04.104 "memory_domains": [ 00:11:04.104 { 00:11:04.104 "dma_device_id": "system", 00:11:04.104 "dma_device_type": 1 00:11:04.104 }, 00:11:04.104 { 
00:11:04.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.104 "dma_device_type": 2 00:11:04.104 } 00:11:04.104 ], 00:11:04.104 "driver_specific": { 00:11:04.104 "passthru": { 00:11:04.104 "name": "pt3", 00:11:04.104 "base_bdev_name": "malloc3" 00:11:04.104 } 00:11:04.104 } 00:11:04.104 }' 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.104 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:04.363 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:04.363 11:22:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:04.363 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:11:04.621 [2024-07-25 11:22:20.386387] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.621 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=301bed4e-f006-4b29-aab2-94d9abbb9c0e 00:11:04.621 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 301bed4e-f006-4b29-aab2-94d9abbb9c0e ']' 00:11:04.621 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:04.880 [2024-07-25 11:22:20.614085] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.880 [2024-07-25 11:22:20.614154] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.880 [2024-07-25 11:22:20.614286] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.880 [2024-07-25 11:22:20.614391] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.880 [2024-07-25 11:22:20.614411] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:04.880 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.880 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:11:05.139 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:11:05.139 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:11:05.139 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 
-- # for i in "${base_bdevs_pt[@]}" 00:11:05.139 11:22:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:05.397 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.397 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:05.657 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:05.657 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:05.915 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:05.915 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:06.174 11:22:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:06.174 [2024-07-25 11:22:22.042469] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:06.174 [2024-07-25 11:22:22.045129] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:06.175 [2024-07-25 11:22:22.045210] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:06.175 [2024-07-25 11:22:22.045301] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:06.175 [2024-07-25 11:22:22.045389] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:06.175 [2024-07-25 11:22:22.045428] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:06.175 [2024-07-25 11:22:22.045463] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.175 [2024-07-25 11:22:22.045483] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:06.175 request: 00:11:06.175 { 00:11:06.175 "name": "raid_bdev1", 00:11:06.175 "raid_level": "raid0", 00:11:06.175 "base_bdevs": [ 00:11:06.175 "malloc1", 00:11:06.175 "malloc2", 00:11:06.175 "malloc3" 00:11:06.175 ], 00:11:06.175 "strip_size_kb": 64, 00:11:06.175 "superblock": false, 00:11:06.175 "method": "bdev_raid_create", 00:11:06.175 "req_id": 1 00:11:06.175 } 00:11:06.175 Got JSON-RPC error response 00:11:06.175 response: 00:11:06.175 { 00:11:06.175 "code": -17, 00:11:06.175 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:06.175 } 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.433 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:11:06.692 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:11:06.692 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:11:06.692 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:06.950 [2024-07-25 11:22:22.598084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:06.950 [2024-07-25 11:22:22.598221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.950 [2024-07-25 11:22:22.598255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:06.950 [2024-07-25 11:22:22.598274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.950 [2024-07-25 11:22:22.601415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.950 [2024-07-25 11:22:22.601472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:06.950 [2024-07-25 11:22:22.601608] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:06.950 [2024-07-25 11:22:22.601745] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:06.950 pt1 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:06.950 
11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.950 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.209 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.209 "name": "raid_bdev1", 00:11:07.209 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:07.209 "strip_size_kb": 64, 00:11:07.209 "state": "configuring", 00:11:07.209 "raid_level": "raid0", 00:11:07.209 "superblock": true, 00:11:07.209 "num_base_bdevs": 3, 00:11:07.209 "num_base_bdevs_discovered": 1, 00:11:07.209 "num_base_bdevs_operational": 3, 00:11:07.209 "base_bdevs_list": [ 00:11:07.209 { 00:11:07.209 "name": "pt1", 00:11:07.209 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.209 "is_configured": true, 00:11:07.209 "data_offset": 2048, 00:11:07.209 "data_size": 63488 00:11:07.209 }, 00:11:07.209 { 00:11:07.209 "name": null, 00:11:07.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.209 "is_configured": false, 00:11:07.209 "data_offset": 2048, 00:11:07.209 "data_size": 63488 00:11:07.209 }, 00:11:07.209 { 00:11:07.209 "name": null, 00:11:07.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.209 "is_configured": false, 00:11:07.209 "data_offset": 2048, 00:11:07.209 "data_size": 63488 00:11:07.209 } 00:11:07.209 ] 00:11:07.209 }' 00:11:07.209 11:22:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.209 11:22:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.774 11:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:11:07.774 11:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.032 [2024-07-25 11:22:23.804317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.032 [2024-07-25 11:22:23.804466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.032 [2024-07-25 11:22:23.804501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:08.032 [2024-07-25 11:22:23.804521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.032 [2024-07-25 11:22:23.805193] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.032 [2024-07-25 11:22:23.805257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.032 [2024-07-25 11:22:23.805390] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:08.032 [2024-07-25 11:22:23.805438] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.032 pt2 00:11:08.032 11:22:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:08.290 [2024-07-25 11:22:24.062194] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.290 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.549 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.549 "name": "raid_bdev1", 00:11:08.549 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:08.549 "strip_size_kb": 64, 00:11:08.549 "state": "configuring", 00:11:08.549 "raid_level": "raid0", 00:11:08.549 "superblock": true, 00:11:08.549 "num_base_bdevs": 3, 00:11:08.549 "num_base_bdevs_discovered": 1, 00:11:08.549 "num_base_bdevs_operational": 3, 00:11:08.549 "base_bdevs_list": [ 00:11:08.549 { 00:11:08.549 "name": "pt1", 00:11:08.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.549 "is_configured": true, 00:11:08.549 "data_offset": 2048, 00:11:08.549 "data_size": 63488 00:11:08.549 }, 00:11:08.549 { 00:11:08.549 "name": null, 00:11:08.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.549 "is_configured": false, 00:11:08.549 "data_offset": 2048, 00:11:08.549 "data_size": 63488 00:11:08.549 }, 00:11:08.549 { 00:11:08.549 "name": null, 00:11:08.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.549 "is_configured": false, 00:11:08.549 "data_offset": 2048, 00:11:08.549 "data_size": 63488 00:11:08.549 } 00:11:08.549 ] 00:11:08.549 }' 00:11:08.549 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.549 11:22:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.115 11:22:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:11:09.115 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:09.115 11:22:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.373 [2024-07-25 11:22:25.242375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.373 [2024-07-25 11:22:25.242474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.374 [2024-07-25 11:22:25.242511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:09.374 [2024-07-25 11:22:25.242526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.374 [2024-07-25 11:22:25.243129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.374 [2024-07-25 11:22:25.243162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.374 [2024-07-25 11:22:25.243271] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.374 [2024-07-25 11:22:25.243307] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.374 pt2 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:09.632 [2024-07-25 11:22:25.478565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:09.632 [2024-07-25 11:22:25.479026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.632 [2024-07-25 11:22:25.479092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:09.632 [2024-07-25 11:22:25.479113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.632 [2024-07-25 11:22:25.479814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.632 [2024-07-25 11:22:25.479847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:09.632 [2024-07-25 11:22:25.480005] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:09.632 [2024-07-25 11:22:25.480042] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:09.632 [2024-07-25 11:22:25.480244] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:09.632 [2024-07-25 11:22:25.480261] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:09.632 [2024-07-25 11:22:25.480622] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:09.632 [2024-07-25 11:22:25.480839] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:09.632 [2024-07-25 11:22:25.480861] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:09.632 [2024-07-25 11:22:25.481037] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:09.632 pt3 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.632 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.199 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:10.199 "name": "raid_bdev1", 00:11:10.199 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:10.199 "strip_size_kb": 64, 00:11:10.199 "state": "online", 00:11:10.199 "raid_level": "raid0", 00:11:10.199 "superblock": true, 00:11:10.199 "num_base_bdevs": 3, 00:11:10.199 "num_base_bdevs_discovered": 3, 00:11:10.199 "num_base_bdevs_operational": 3, 00:11:10.199 "base_bdevs_list": [ 00:11:10.199 { 00:11:10.199 "name": "pt1", 00:11:10.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.199 "is_configured": true, 00:11:10.199 "data_offset": 2048, 00:11:10.199 "data_size": 63488 00:11:10.199 }, 00:11:10.199 { 00:11:10.199 "name": "pt2", 00:11:10.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.199 "is_configured": true, 00:11:10.199 "data_offset": 2048, 00:11:10.199 "data_size": 63488 00:11:10.199 }, 00:11:10.199 { 00:11:10.199 "name": "pt3", 00:11:10.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.199 "is_configured": true, 00:11:10.199 "data_offset": 2048, 00:11:10.199 "data_size": 63488 00:11:10.199 } 00:11:10.199 ] 00:11:10.199 }' 00:11:10.199 11:22:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:10.199 11:22:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:10.765 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:11.024 [2024-07-25 11:22:26.683254] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:11.024 "name": "raid_bdev1", 00:11:11.024 "aliases": [ 00:11:11.024 "301bed4e-f006-4b29-aab2-94d9abbb9c0e" 00:11:11.024 ], 00:11:11.024 "product_name": "Raid Volume", 00:11:11.024 "block_size": 512, 00:11:11.024 "num_blocks": 190464, 00:11:11.024 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:11.024 "assigned_rate_limits": { 00:11:11.024 "rw_ios_per_sec": 0, 00:11:11.024 "rw_mbytes_per_sec": 0, 00:11:11.024 "r_mbytes_per_sec": 0, 00:11:11.024 "w_mbytes_per_sec": 0 00:11:11.024 }, 00:11:11.024 "claimed": false, 00:11:11.024 "zoned": false, 00:11:11.024 "supported_io_types": { 00:11:11.024 "read": true, 00:11:11.024 "write": true, 00:11:11.024 "unmap": true, 00:11:11.024 "flush": true, 00:11:11.024 "reset": true, 00:11:11.024 "nvme_admin": false, 00:11:11.024 "nvme_io": false, 00:11:11.024 "nvme_io_md": false, 00:11:11.024 "write_zeroes": true, 00:11:11.024 "zcopy": false, 00:11:11.024 "get_zone_info": false, 00:11:11.024 "zone_management": false, 00:11:11.024 "zone_append": false, 00:11:11.024 "compare": false, 00:11:11.024 "compare_and_write": false, 00:11:11.024 "abort": false, 00:11:11.024 "seek_hole": false, 00:11:11.024 "seek_data": false, 00:11:11.024 "copy": false, 00:11:11.024 "nvme_iov_md": false 00:11:11.024 }, 00:11:11.024 "memory_domains": [ 00:11:11.024 { 00:11:11.024 "dma_device_id": "system", 00:11:11.024 "dma_device_type": 1 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.024 "dma_device_type": 2 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "dma_device_id": "system", 00:11:11.024 "dma_device_type": 1 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.024 "dma_device_type": 2 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "dma_device_id": "system", 00:11:11.024 "dma_device_type": 1 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.024 "dma_device_type": 2 00:11:11.024 } 00:11:11.024 ], 00:11:11.024 "driver_specific": { 00:11:11.024 "raid": { 00:11:11.024 "uuid": "301bed4e-f006-4b29-aab2-94d9abbb9c0e", 00:11:11.024 "strip_size_kb": 64, 00:11:11.024 "state": "online", 00:11:11.024 "raid_level": "raid0", 00:11:11.024 "superblock": true, 00:11:11.024 "num_base_bdevs": 3, 00:11:11.024 "num_base_bdevs_discovered": 3, 00:11:11.024 "num_base_bdevs_operational": 3, 00:11:11.024 "base_bdevs_list": [ 00:11:11.024 { 00:11:11.024 "name": "pt1", 00:11:11.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.024 "is_configured": true, 00:11:11.024 "data_offset": 2048, 00:11:11.024 "data_size": 63488 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "name": "pt2", 00:11:11.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.024 "is_configured": true, 00:11:11.024 "data_offset": 2048, 00:11:11.024 "data_size": 63488 00:11:11.024 }, 00:11:11.024 { 00:11:11.024 "name": "pt3", 00:11:11.024 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:11.024 "is_configured": true, 00:11:11.024 "data_offset": 2048, 00:11:11.024 "data_size": 63488 00:11:11.024 } 00:11:11.024 ] 00:11:11.024 } 00:11:11.024 } 00:11:11.024 }' 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:11.024 pt2 00:11:11.024 pt3' 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:11.024 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:11.282 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:11.282 "name": "pt1", 00:11:11.282 "aliases": [ 00:11:11.282 "00000000-0000-0000-0000-000000000001" 00:11:11.282 ], 00:11:11.282 "product_name": "passthru", 00:11:11.282 "block_size": 512, 00:11:11.282 "num_blocks": 65536, 00:11:11.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.282 "assigned_rate_limits": { 00:11:11.282 "rw_ios_per_sec": 0, 00:11:11.282 "rw_mbytes_per_sec": 0, 00:11:11.282 "r_mbytes_per_sec": 0, 00:11:11.283 "w_mbytes_per_sec": 0 00:11:11.283 }, 00:11:11.283 "claimed": true, 00:11:11.283 "claim_type": "exclusive_write", 00:11:11.283 "zoned": false, 00:11:11.283 "supported_io_types": { 00:11:11.283 "read": true, 00:11:11.283 "write": true, 00:11:11.283 "unmap": true, 00:11:11.283 "flush": true, 00:11:11.283 "reset": true, 00:11:11.283 "nvme_admin": false, 00:11:11.283 "nvme_io": false, 00:11:11.283 "nvme_io_md": false, 00:11:11.283 "write_zeroes": true, 00:11:11.283 "zcopy": true, 00:11:11.283 "get_zone_info": false, 00:11:11.283 "zone_management": false, 00:11:11.283 "zone_append": false, 00:11:11.283 "compare": false, 00:11:11.283 "compare_and_write": false, 00:11:11.283 "abort": true, 00:11:11.283 "seek_hole": false, 00:11:11.283 "seek_data": false, 00:11:11.283 "copy": true, 00:11:11.283 "nvme_iov_md": false 00:11:11.283 }, 00:11:11.283 "memory_domains": [ 00:11:11.283 { 00:11:11.283 "dma_device_id": "system", 00:11:11.283 "dma_device_type": 1 00:11:11.283 }, 00:11:11.283 { 00:11:11.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.283 "dma_device_type": 2 00:11:11.283 } 00:11:11.283 ], 00:11:11.283 "driver_specific": { 00:11:11.283 "passthru": { 00:11:11.283 "name": "pt1", 00:11:11.283 "base_bdev_name": "malloc1" 00:11:11.283 } 00:11:11.283 } 00:11:11.283 }' 00:11:11.283 11:22:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:11.283 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:11.283 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:11.283 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:11.283 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:11.541 11:22:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:11.541 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.107 "name": "pt2", 00:11:12.107 "aliases": [ 00:11:12.107 "00000000-0000-0000-0000-000000000002" 00:11:12.107 ], 00:11:12.107 "product_name": "passthru", 00:11:12.107 "block_size": 512, 00:11:12.107 "num_blocks": 65536, 00:11:12.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.107 "assigned_rate_limits": { 00:11:12.107 "rw_ios_per_sec": 0, 00:11:12.107 "rw_mbytes_per_sec": 0, 00:11:12.107 "r_mbytes_per_sec": 0, 00:11:12.107 "w_mbytes_per_sec": 0 00:11:12.107 }, 00:11:12.107 "claimed": true, 00:11:12.107 "claim_type": "exclusive_write", 00:11:12.107 "zoned": false, 00:11:12.107 "supported_io_types": { 00:11:12.107 "read": true, 00:11:12.107 "write": true, 00:11:12.107 "unmap": true, 00:11:12.107 "flush": true, 00:11:12.107 "reset": true, 00:11:12.107 "nvme_admin": false, 00:11:12.107 "nvme_io": false, 00:11:12.107 "nvme_io_md": false, 00:11:12.107 "write_zeroes": true, 00:11:12.107 "zcopy": true, 00:11:12.107 "get_zone_info": false, 00:11:12.107 "zone_management": false, 00:11:12.107 "zone_append": false, 00:11:12.107 "compare": false, 00:11:12.107 "compare_and_write": false, 00:11:12.107 "abort": true, 00:11:12.107 "seek_hole": false, 00:11:12.107 "seek_data": false, 00:11:12.107 "copy": true, 00:11:12.107 "nvme_iov_md": false 00:11:12.107 }, 00:11:12.107 "memory_domains": [ 00:11:12.107 { 00:11:12.107 "dma_device_id": "system", 00:11:12.107 "dma_device_type": 1 00:11:12.107 }, 00:11:12.107 { 00:11:12.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.107 "dma_device_type": 2 00:11:12.107 } 00:11:12.107 ], 00:11:12.107 "driver_specific": { 00:11:12.107 "passthru": { 00:11:12.107 "name": "pt2", 00:11:12.107 "base_bdev_name": "malloc2" 00:11:12.107 } 00:11:12.107 } 00:11:12.107 }' 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.107 11:22:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.366 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.366 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.366 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.366 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.366 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:12.624 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.624 "name": "pt3", 00:11:12.624 "aliases": [ 00:11:12.624 "00000000-0000-0000-0000-000000000003" 00:11:12.624 ], 00:11:12.624 "product_name": "passthru", 00:11:12.624 "block_size": 512, 00:11:12.624 "num_blocks": 65536, 00:11:12.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.624 "assigned_rate_limits": { 00:11:12.624 "rw_ios_per_sec": 0, 00:11:12.624 "rw_mbytes_per_sec": 0, 00:11:12.624 "r_mbytes_per_sec": 0, 00:11:12.624 "w_mbytes_per_sec": 0 00:11:12.624 }, 00:11:12.624 "claimed": true, 00:11:12.624 "claim_type": "exclusive_write", 00:11:12.624 "zoned": false, 00:11:12.624 "supported_io_types": { 00:11:12.624 "read": true, 00:11:12.624 "write": true, 00:11:12.624 "unmap": true, 00:11:12.624 "flush": true, 00:11:12.624 "reset": true, 00:11:12.624 "nvme_admin": false, 00:11:12.624 "nvme_io": false, 00:11:12.624 "nvme_io_md": false, 00:11:12.624 "write_zeroes": true, 00:11:12.624 "zcopy": true, 00:11:12.624 "get_zone_info": false, 00:11:12.624 "zone_management": false, 00:11:12.624 "zone_append": false, 00:11:12.624 "compare": false, 00:11:12.624 "compare_and_write": false, 00:11:12.624 "abort": true, 00:11:12.624 "seek_hole": false, 00:11:12.624 "seek_data": false, 00:11:12.624 "copy": true, 00:11:12.624 "nvme_iov_md": false 00:11:12.624 }, 00:11:12.624 "memory_domains": [ 00:11:12.624 { 00:11:12.624 "dma_device_id": "system", 00:11:12.624 "dma_device_type": 1 00:11:12.624 }, 00:11:12.624 { 00:11:12.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.624 "dma_device_type": 2 00:11:12.624 } 00:11:12.624 ], 00:11:12.625 "driver_specific": { 00:11:12.625 "passthru": { 00:11:12.625 "name": "pt3", 00:11:12.625 "base_bdev_name": "malloc3" 00:11:12.625 } 00:11:12.625 } 00:11:12.625 }' 00:11:12.625 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.625 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.625 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.625 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.883 
11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.883 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:11:13.141 [2024-07-25 11:22:28.967895] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 301bed4e-f006-4b29-aab2-94d9abbb9c0e '!=' 301bed4e-f006-4b29-aab2-94d9abbb9c0e ']' 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 69202 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69202 ']' 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69202 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.142 11:22:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69202 00:11:13.142 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.142 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.142 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69202' 00:11:13.142 killing process with pid 69202 00:11:13.142 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 69202 00:11:13.142 [2024-07-25 11:22:29.016592] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.142 11:22:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69202 00:11:13.142 [2024-07-25 11:22:29.016716] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.142 [2024-07-25 11:22:29.016800] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.142 [2024-07-25 11:22:29.016815] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:13.708 [2024-07-25 11:22:29.284401] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.645 11:22:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:11:14.645 00:11:14.645 real 0m16.390s 00:11:14.645 user 0m29.041s 00:11:14.645 sys 0m2.109s 00:11:14.645 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.645 11:22:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.645 ************************************ 00:11:14.645 END TEST raid_superblock_test 00:11:14.645 ************************************ 00:11:14.645 11:22:30 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:14.645 11:22:30 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:14.645 11:22:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.645 11:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.645 ************************************ 00:11:14.645 START TEST raid_read_error_test 00:11:14.645 ************************************ 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:14.645 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.m6mkYivj53 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=69688 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 69688 /var/tmp/spdk-raid.sock 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69688 ']' 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.904 11:22:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.904 [2024-07-25 11:22:30.637313] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:11:14.904 [2024-07-25 11:22:30.637515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69688 ] 00:11:15.162 [2024-07-25 11:22:30.817340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.420 [2024-07-25 11:22:31.113019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.679 [2024-07-25 11:22:31.338850] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.679 [2024-07-25 11:22:31.338949] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.679 11:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.679 11:22:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:15.679 11:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:15.679 11:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:15.936 BaseBdev1_malloc 00:11:15.936 11:22:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:16.195 true 00:11:16.195 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:16.453 [2024-07-25 11:22:32.282613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:16.453 [2024-07-25 11:22:32.282755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.453 [2024-07-25 11:22:32.282796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:16.453 [2024-07-25 11:22:32.282813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.453 [2024-07-25 11:22:32.285844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.453 [2024-07-25 11:22:32.285888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.453 BaseBdev1 
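
A minimal reconstruction of the per-device stack that raid_read_error_test assembles above, using only the rpc.py calls visible in the log (socket path, sizes, and bdev names are copied from it; this is a sketch for readability, not additional captured output):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # malloc backing device (size 32, 512-byte blocks, as shown in the log)
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
  # error-injection bdev layered on top; the log shows it exposed as EE_BaseBdev1_malloc
  $RPC bdev_error_create BaseBdev1_malloc
  # passthru bdev named BaseBdev1, which the RAID test later claims as a base bdev
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

The same three steps repeat for BaseBdev2 and BaseBdev3 below, giving the error bdev a place to inject read failures (error_io_type=read) beneath the passthru and RAID layers later in the run.
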
00:11:16.453 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:16.453 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.711 BaseBdev2_malloc 00:11:16.970 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:17.228 true 00:11:17.228 11:22:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.485 [2024-07-25 11:22:33.121770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.485 [2024-07-25 11:22:33.121873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.485 [2024-07-25 11:22:33.121934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:17.485 [2024-07-25 11:22:33.121961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.486 [2024-07-25 11:22:33.124965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.486 [2024-07-25 11:22:33.125012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.486 BaseBdev2 00:11:17.486 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:17.486 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:17.743 BaseBdev3_malloc 00:11:17.743 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:18.000 true 00:11:18.000 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:18.259 [2024-07-25 11:22:33.885766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:18.259 [2024-07-25 11:22:33.886071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.259 [2024-07-25 11:22:33.886124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:18.259 [2024-07-25 11:22:33.886142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.259 [2024-07-25 11:22:33.888979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.259 [2024-07-25 11:22:33.889022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:18.259 BaseBdev3 00:11:18.259 11:22:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:18.259 [2024-07-25 11:22:34.118030] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.259 [2024-07-25 11:22:34.120441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.259 
[2024-07-25 11:22:34.120560] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.259 [2024-07-25 11:22:34.120860] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:18.259 [2024-07-25 11:22:34.120885] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:18.259 [2024-07-25 11:22:34.121242] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:18.259 [2024-07-25 11:22:34.121494] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:18.259 [2024-07-25 11:22:34.121511] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:18.259 [2024-07-25 11:22:34.121744] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.520 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.778 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.778 "name": "raid_bdev1", 00:11:18.778 "uuid": "b0a471a7-e6cb-43a1-8900-a5168474423a", 00:11:18.778 "strip_size_kb": 64, 00:11:18.778 "state": "online", 00:11:18.778 "raid_level": "raid0", 00:11:18.778 "superblock": true, 00:11:18.778 "num_base_bdevs": 3, 00:11:18.778 "num_base_bdevs_discovered": 3, 00:11:18.778 "num_base_bdevs_operational": 3, 00:11:18.778 "base_bdevs_list": [ 00:11:18.778 { 00:11:18.778 "name": "BaseBdev1", 00:11:18.778 "uuid": "0fa83012-68e6-5def-b073-b7ecfe6b84d6", 00:11:18.778 "is_configured": true, 00:11:18.778 "data_offset": 2048, 00:11:18.778 "data_size": 63488 00:11:18.778 }, 00:11:18.778 { 00:11:18.778 "name": "BaseBdev2", 00:11:18.778 "uuid": "d0886577-2ee2-5ccc-b071-17f505b01a66", 00:11:18.778 "is_configured": true, 00:11:18.778 "data_offset": 2048, 00:11:18.778 "data_size": 63488 00:11:18.778 }, 00:11:18.778 { 00:11:18.778 "name": "BaseBdev3", 00:11:18.778 "uuid": "e0d8d770-b0db-5c59-af23-57a8b09fe1fe", 00:11:18.778 "is_configured": true, 00:11:18.778 "data_offset": 2048, 00:11:18.778 "data_size": 63488 00:11:18.778 } 00:11:18.778 ] 00:11:18.778 }' 00:11:18.778 11:22:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.778 11:22:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.343 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:19.343 11:22:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:19.343 [2024-07-25 11:22:35.123681] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:20.275 11:22:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.533 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.823 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.823 "name": "raid_bdev1", 00:11:20.823 "uuid": "b0a471a7-e6cb-43a1-8900-a5168474423a", 00:11:20.823 "strip_size_kb": 64, 00:11:20.823 "state": "online", 00:11:20.823 "raid_level": "raid0", 00:11:20.823 "superblock": true, 00:11:20.823 "num_base_bdevs": 3, 00:11:20.823 "num_base_bdevs_discovered": 3, 00:11:20.823 "num_base_bdevs_operational": 3, 00:11:20.823 "base_bdevs_list": [ 00:11:20.823 { 00:11:20.823 "name": "BaseBdev1", 00:11:20.823 "uuid": "0fa83012-68e6-5def-b073-b7ecfe6b84d6", 00:11:20.823 "is_configured": true, 00:11:20.823 "data_offset": 2048, 00:11:20.823 "data_size": 63488 00:11:20.823 }, 00:11:20.823 { 00:11:20.823 "name": "BaseBdev2", 00:11:20.823 "uuid": "d0886577-2ee2-5ccc-b071-17f505b01a66", 00:11:20.823 "is_configured": true, 00:11:20.823 "data_offset": 2048, 00:11:20.823 "data_size": 63488 00:11:20.823 }, 00:11:20.823 { 00:11:20.823 "name": "BaseBdev3", 00:11:20.823 "uuid": 
"e0d8d770-b0db-5c59-af23-57a8b09fe1fe", 00:11:20.823 "is_configured": true, 00:11:20.823 "data_offset": 2048, 00:11:20.823 "data_size": 63488 00:11:20.823 } 00:11:20.823 ] 00:11:20.823 }' 00:11:20.823 11:22:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.823 11:22:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.388 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:21.646 [2024-07-25 11:22:37.456661] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.646 [2024-07-25 11:22:37.456901] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.646 [2024-07-25 11:22:37.460144] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.646 [2024-07-25 11:22:37.460343] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.646 [2024-07-25 11:22:37.460516] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.646 [2024-07-25 11:22:37.460703] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:21.646 0 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 69688 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69688 ']' 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69688 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69688 00:11:21.646 killing process with pid 69688 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69688' 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69688 00:11:21.646 [2024-07-25 11:22:37.515840] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.646 11:22:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69688 00:11:21.904 [2024-07-25 11:22:37.717918] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.m6mkYivj53 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:23.278 
11:22:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:11:23.278 00:11:23.278 real 0m8.412s 00:11:23.278 user 0m12.746s 00:11:23.278 sys 0m1.068s 00:11:23.278 ************************************ 00:11:23.278 END TEST raid_read_error_test 00:11:23.278 ************************************ 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.278 11:22:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.278 11:22:38 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:23.278 11:22:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.278 11:22:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.278 11:22:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.278 ************************************ 00:11:23.278 START TEST raid_write_error_test 00:11:23.278 ************************************ 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.3vE9MUl7wp 00:11:23.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=69884 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 69884 /var/tmp/spdk-raid.sock 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69884 ']' 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.278 11:22:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.278 [2024-07-25 11:22:39.091566] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
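Once bdevperf is up (started with -z, so it waits for an RPC trigger) and the raid0 bdev is assembled, the test drives it from outside over the same socket: it arms a single injected failure and then starts the workload. A sketch of that driver sequence, assuming the paths shown in this trace; the final line mirrors the grep/awk the script runs on the bdevperf log to read the per-bdev fail rate:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  # arm one write failure on BaseBdev1's error bdev (the read test uses "read" here)
  $SPDK/scripts/rpc.py -s $SOCK bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # tell the already-running bdevperf to execute its workload now
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  # extract the fail rate for raid_bdev1 from the bdevperf log; the test
  # passes only if it is not 0.00
  grep -v Job /raidtest/tmp.3vE9MUl7wp | grep raid_bdev1 | awk '{print $6}'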
00:11:23.278 [2024-07-25 11:22:39.091772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69884 ] 00:11:23.536 [2024-07-25 11:22:39.267688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.794 [2024-07-25 11:22:39.523944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.052 [2024-07-25 11:22:39.725534] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.052 [2024-07-25 11:22:39.725577] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.310 11:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:24.310 11:22:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:24.310 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:24.310 11:22:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.567 BaseBdev1_malloc 00:11:24.567 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:24.826 true 00:11:24.826 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:25.084 [2024-07-25 11:22:40.755927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:25.084 [2024-07-25 11:22:40.756024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.084 [2024-07-25 11:22:40.756058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:25.084 [2024-07-25 11:22:40.756073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.084 [2024-07-25 11:22:40.758842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.084 [2024-07-25 11:22:40.758886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:25.084 BaseBdev1 00:11:25.084 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:25.084 11:22:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:25.341 BaseBdev2_malloc 00:11:25.342 11:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:25.599 true 00:11:25.599 11:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:25.857 [2024-07-25 11:22:41.566484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:25.857 [2024-07-25 11:22:41.566561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.857 [2024-07-25 11:22:41.566598] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:25.857 [2024-07-25 11:22:41.566613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.857 [2024-07-25 11:22:41.569415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.857 [2024-07-25 11:22:41.569460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:25.858 BaseBdev2 00:11:25.858 11:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:25.858 11:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:26.115 BaseBdev3_malloc 00:11:26.115 11:22:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:26.373 true 00:11:26.373 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:26.631 [2024-07-25 11:22:42.346644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:26.631 [2024-07-25 11:22:42.346755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.631 [2024-07-25 11:22:42.346797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:26.631 [2024-07-25 11:22:42.346813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.631 [2024-07-25 11:22:42.349796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.631 [2024-07-25 11:22:42.349841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:26.631 BaseBdev3 00:11:26.631 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:26.889 [2024-07-25 11:22:42.586918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.889 [2024-07-25 11:22:42.589655] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.889 [2024-07-25 11:22:42.589921] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.889 [2024-07-25 11:22:42.590338] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:26.889 [2024-07-25 11:22:42.590481] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:26.889 [2024-07-25 11:22:42.590926] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.889 [2024-07-25 11:22:42.591306] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:26.889 [2024-07-25 11:22:42.591436] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:26.889 [2024-07-25 11:22:42.591846] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:26.889 
11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.889 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.147 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.147 "name": "raid_bdev1", 00:11:27.147 "uuid": "485f1dc9-3fce-4156-8da4-f62c813cebb4", 00:11:27.147 "strip_size_kb": 64, 00:11:27.147 "state": "online", 00:11:27.147 "raid_level": "raid0", 00:11:27.147 "superblock": true, 00:11:27.147 "num_base_bdevs": 3, 00:11:27.147 "num_base_bdevs_discovered": 3, 00:11:27.147 "num_base_bdevs_operational": 3, 00:11:27.147 "base_bdevs_list": [ 00:11:27.147 { 00:11:27.147 "name": "BaseBdev1", 00:11:27.147 "uuid": "281812fc-2b35-5f3f-b989-5c1cc82572ce", 00:11:27.147 "is_configured": true, 00:11:27.147 "data_offset": 2048, 00:11:27.147 "data_size": 63488 00:11:27.147 }, 00:11:27.147 { 00:11:27.147 "name": "BaseBdev2", 00:11:27.147 "uuid": "5668fb7a-9e93-52ea-a806-365cb1ae913d", 00:11:27.147 "is_configured": true, 00:11:27.147 "data_offset": 2048, 00:11:27.147 "data_size": 63488 00:11:27.147 }, 00:11:27.147 { 00:11:27.147 "name": "BaseBdev3", 00:11:27.147 "uuid": "cfba30cc-74f4-5fa4-a036-030b9c56f4b4", 00:11:27.147 "is_configured": true, 00:11:27.147 "data_offset": 2048, 00:11:27.147 "data_size": 63488 00:11:27.147 } 00:11:27.147 ] 00:11:27.147 }' 00:11:27.147 11:22:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.147 11:22:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 11:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:11:27.713 11:22:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:27.970 [2024-07-25 11:22:43.673514] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:28.902 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:29.161 11:22:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.161 11:22:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.419 11:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.419 "name": "raid_bdev1", 00:11:29.419 "uuid": "485f1dc9-3fce-4156-8da4-f62c813cebb4", 00:11:29.419 "strip_size_kb": 64, 00:11:29.419 "state": "online", 00:11:29.419 "raid_level": "raid0", 00:11:29.419 "superblock": true, 00:11:29.419 "num_base_bdevs": 3, 00:11:29.419 "num_base_bdevs_discovered": 3, 00:11:29.419 "num_base_bdevs_operational": 3, 00:11:29.419 "base_bdevs_list": [ 00:11:29.419 { 00:11:29.419 "name": "BaseBdev1", 00:11:29.419 "uuid": "281812fc-2b35-5f3f-b989-5c1cc82572ce", 00:11:29.419 "is_configured": true, 00:11:29.419 "data_offset": 2048, 00:11:29.419 "data_size": 63488 00:11:29.419 }, 00:11:29.419 { 00:11:29.419 "name": "BaseBdev2", 00:11:29.419 "uuid": "5668fb7a-9e93-52ea-a806-365cb1ae913d", 00:11:29.419 "is_configured": true, 00:11:29.419 "data_offset": 2048, 00:11:29.419 "data_size": 63488 00:11:29.419 }, 00:11:29.419 { 00:11:29.419 "name": "BaseBdev3", 00:11:29.419 "uuid": "cfba30cc-74f4-5fa4-a036-030b9c56f4b4", 00:11:29.419 "is_configured": true, 00:11:29.419 "data_offset": 2048, 00:11:29.419 "data_size": 63488 00:11:29.419 } 00:11:29.419 ] 00:11:29.419 }' 00:11:29.419 11:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.419 11:22:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.985 11:22:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:30.242 [2024-07-25 11:22:46.047184] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:30.242 [2024-07-25 11:22:46.047433] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.242 [2024-07-25 11:22:46.050709] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.242 0 00:11:30.242 [2024-07-25 11:22:46.050909] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:30.242 [2024-07-25 11:22:46.050973] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.242 [2024-07-25 11:22:46.050993] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 69884 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69884 ']' 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69884 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69884 00:11:30.242 killing process with pid 69884 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69884' 00:11:30.242 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69884 00:11:30.243 11:22:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69884 00:11:30.243 [2024-07-25 11:22:46.093489] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.500 [2024-07-25 11:22:46.302122] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.3vE9MUl7wp 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:11:31.875 ************************************ 00:11:31.875 END TEST raid_write_error_test 00:11:31.875 ************************************ 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:11:31.875 00:11:31.875 real 0m8.553s 00:11:31.875 user 0m13.000s 00:11:31.875 sys 0m1.023s 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.875 11:22:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.875 11:22:47 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:11:31.875 11:22:47 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:11:31.875 11:22:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:31.875 11:22:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.875 11:22:47 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:11:31.875 ************************************ 00:11:31.875 START TEST raid_state_function_test 00:11:31.875 ************************************ 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=70074 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:31.875 Process raid pid: 70074 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid 
pid: 70074' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 70074 /var/tmp/spdk-raid.sock 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 70074 ']' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.875 11:22:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.875 [2024-07-25 11:22:47.690108] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:11:31.875 [2024-07-25 11:22:47.690253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.133 [2024-07-25 11:22:47.855484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.392 [2024-07-25 11:22:48.121448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.649 [2024-07-25 11:22:48.326651] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.649 [2024-07-25 11:22:48.326703] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.908 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.908 11:22:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:32.908 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:33.166 [2024-07-25 11:22:48.819644] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.166 [2024-07-25 11:22:48.819709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.166 [2024-07-25 11:22:48.819729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.166 [2024-07-25 11:22:48.819742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.166 [2024-07-25 11:22:48.819757] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.166 [2024-07-25 11:22:48.819768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.166 11:22:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.424 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.424 "name": "Existed_Raid", 00:11:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.424 "strip_size_kb": 64, 00:11:33.424 "state": "configuring", 00:11:33.424 "raid_level": "concat", 00:11:33.424 "superblock": false, 00:11:33.424 "num_base_bdevs": 3, 00:11:33.424 "num_base_bdevs_discovered": 0, 00:11:33.424 "num_base_bdevs_operational": 3, 00:11:33.424 "base_bdevs_list": [ 00:11:33.424 { 00:11:33.424 "name": "BaseBdev1", 00:11:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.424 "is_configured": false, 00:11:33.424 "data_offset": 0, 00:11:33.424 "data_size": 0 00:11:33.424 }, 00:11:33.424 { 00:11:33.424 "name": "BaseBdev2", 00:11:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.424 "is_configured": false, 00:11:33.424 "data_offset": 0, 00:11:33.424 "data_size": 0 00:11:33.424 }, 00:11:33.424 { 00:11:33.424 "name": "BaseBdev3", 00:11:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.424 "is_configured": false, 00:11:33.424 "data_offset": 0, 00:11:33.424 "data_size": 0 00:11:33.424 } 00:11:33.424 ] 00:11:33.424 }' 00:11:33.424 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.424 11:22:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.071 11:22:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:34.329 [2024-07-25 11:22:49.995788] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.329 [2024-07-25 11:22:49.995832] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.329 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.587 [2024-07-25 11:22:50.219868] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.587 [2024-07-25 11:22:50.219936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.587 [2024-07-25 11:22:50.219968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.587 [2024-07-25 
11:22:50.219983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.587 [2024-07-25 11:22:50.219996] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.587 [2024-07-25 11:22:50.220012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.587 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.845 [2024-07-25 11:22:50.480114] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.845 BaseBdev1 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:34.845 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.104 [ 00:11:35.104 { 00:11:35.104 "name": "BaseBdev1", 00:11:35.104 "aliases": [ 00:11:35.104 "2968b05f-fce9-4da1-93a9-774f1cc47bdd" 00:11:35.104 ], 00:11:35.104 "product_name": "Malloc disk", 00:11:35.104 "block_size": 512, 00:11:35.104 "num_blocks": 65536, 00:11:35.104 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:35.104 "assigned_rate_limits": { 00:11:35.104 "rw_ios_per_sec": 0, 00:11:35.104 "rw_mbytes_per_sec": 0, 00:11:35.104 "r_mbytes_per_sec": 0, 00:11:35.104 "w_mbytes_per_sec": 0 00:11:35.104 }, 00:11:35.104 "claimed": true, 00:11:35.104 "claim_type": "exclusive_write", 00:11:35.104 "zoned": false, 00:11:35.104 "supported_io_types": { 00:11:35.104 "read": true, 00:11:35.104 "write": true, 00:11:35.104 "unmap": true, 00:11:35.104 "flush": true, 00:11:35.104 "reset": true, 00:11:35.104 "nvme_admin": false, 00:11:35.104 "nvme_io": false, 00:11:35.104 "nvme_io_md": false, 00:11:35.104 "write_zeroes": true, 00:11:35.104 "zcopy": true, 00:11:35.104 "get_zone_info": false, 00:11:35.104 "zone_management": false, 00:11:35.104 "zone_append": false, 00:11:35.104 "compare": false, 00:11:35.104 "compare_and_write": false, 00:11:35.104 "abort": true, 00:11:35.104 "seek_hole": false, 00:11:35.104 "seek_data": false, 00:11:35.104 "copy": true, 00:11:35.104 "nvme_iov_md": false 00:11:35.104 }, 00:11:35.104 "memory_domains": [ 00:11:35.104 { 00:11:35.104 "dma_device_id": "system", 00:11:35.104 "dma_device_type": 1 00:11:35.104 }, 00:11:35.104 { 00:11:35.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.104 "dma_device_type": 2 00:11:35.104 } 00:11:35.104 ], 00:11:35.104 "driver_specific": {} 00:11:35.104 } 00:11:35.104 ] 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:35.104 
11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.104 11:22:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.362 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:35.362 "name": "Existed_Raid", 00:11:35.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.362 "strip_size_kb": 64, 00:11:35.362 "state": "configuring", 00:11:35.362 "raid_level": "concat", 00:11:35.362 "superblock": false, 00:11:35.362 "num_base_bdevs": 3, 00:11:35.362 "num_base_bdevs_discovered": 1, 00:11:35.362 "num_base_bdevs_operational": 3, 00:11:35.362 "base_bdevs_list": [ 00:11:35.362 { 00:11:35.362 "name": "BaseBdev1", 00:11:35.362 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:35.362 "is_configured": true, 00:11:35.362 "data_offset": 0, 00:11:35.362 "data_size": 65536 00:11:35.362 }, 00:11:35.362 { 00:11:35.362 "name": "BaseBdev2", 00:11:35.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.362 "is_configured": false, 00:11:35.362 "data_offset": 0, 00:11:35.362 "data_size": 0 00:11:35.362 }, 00:11:35.362 { 00:11:35.362 "name": "BaseBdev3", 00:11:35.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.362 "is_configured": false, 00:11:35.362 "data_offset": 0, 00:11:35.362 "data_size": 0 00:11:35.362 } 00:11:35.362 ] 00:11:35.362 }' 00:11:35.362 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:35.362 11:22:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.297 11:22:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:36.297 [2024-07-25 11:22:52.104638] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.297 [2024-07-25 11:22:52.104711] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.297 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:11:36.555 [2024-07-25 11:22:52.428754] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.555 [2024-07-25 11:22:52.431121] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.555 [2024-07-25 11:22:52.431173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.555 [2024-07-25 11:22:52.431194] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.555 [2024-07-25 11:22:52.431208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.814 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.073 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.073 "name": "Existed_Raid", 00:11:37.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.073 "strip_size_kb": 64, 00:11:37.073 "state": "configuring", 00:11:37.073 "raid_level": "concat", 00:11:37.073 "superblock": false, 00:11:37.073 "num_base_bdevs": 3, 00:11:37.073 "num_base_bdevs_discovered": 1, 00:11:37.073 "num_base_bdevs_operational": 3, 00:11:37.073 "base_bdevs_list": [ 00:11:37.073 { 00:11:37.073 "name": "BaseBdev1", 00:11:37.073 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:37.073 "is_configured": true, 00:11:37.073 "data_offset": 0, 00:11:37.073 "data_size": 65536 00:11:37.073 }, 00:11:37.073 { 00:11:37.073 "name": "BaseBdev2", 00:11:37.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.073 "is_configured": false, 00:11:37.073 "data_offset": 0, 00:11:37.073 "data_size": 0 00:11:37.073 }, 00:11:37.073 { 00:11:37.073 "name": "BaseBdev3", 00:11:37.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.073 "is_configured": false, 00:11:37.073 "data_offset": 0, 00:11:37.073 "data_size": 0 00:11:37.073 } 00:11:37.073 ] 
00:11:37.073 }' 00:11:37.073 11:22:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.073 11:22:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.642 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.902 [2024-07-25 11:22:53.669818] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.902 BaseBdev2 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.902 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.160 11:22:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.418 [ 00:11:38.418 { 00:11:38.418 "name": "BaseBdev2", 00:11:38.418 "aliases": [ 00:11:38.418 "d1e26747-7b6f-483d-8932-5aaf3d4bafb8" 00:11:38.418 ], 00:11:38.418 "product_name": "Malloc disk", 00:11:38.418 "block_size": 512, 00:11:38.418 "num_blocks": 65536, 00:11:38.418 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:38.418 "assigned_rate_limits": { 00:11:38.418 "rw_ios_per_sec": 0, 00:11:38.418 "rw_mbytes_per_sec": 0, 00:11:38.418 "r_mbytes_per_sec": 0, 00:11:38.418 "w_mbytes_per_sec": 0 00:11:38.418 }, 00:11:38.418 "claimed": true, 00:11:38.418 "claim_type": "exclusive_write", 00:11:38.418 "zoned": false, 00:11:38.418 "supported_io_types": { 00:11:38.418 "read": true, 00:11:38.418 "write": true, 00:11:38.418 "unmap": true, 00:11:38.418 "flush": true, 00:11:38.418 "reset": true, 00:11:38.418 "nvme_admin": false, 00:11:38.418 "nvme_io": false, 00:11:38.418 "nvme_io_md": false, 00:11:38.418 "write_zeroes": true, 00:11:38.418 "zcopy": true, 00:11:38.418 "get_zone_info": false, 00:11:38.418 "zone_management": false, 00:11:38.418 "zone_append": false, 00:11:38.418 "compare": false, 00:11:38.418 "compare_and_write": false, 00:11:38.418 "abort": true, 00:11:38.418 "seek_hole": false, 00:11:38.418 "seek_data": false, 00:11:38.418 "copy": true, 00:11:38.418 "nvme_iov_md": false 00:11:38.418 }, 00:11:38.418 "memory_domains": [ 00:11:38.418 { 00:11:38.418 "dma_device_id": "system", 00:11:38.418 "dma_device_type": 1 00:11:38.418 }, 00:11:38.418 { 00:11:38.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.418 "dma_device_type": 2 00:11:38.418 } 00:11:38.418 ], 00:11:38.418 "driver_specific": {} 00:11:38.418 } 00:11:38.418 ] 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.418 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.419 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.419 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.419 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.677 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.677 "name": "Existed_Raid", 00:11:38.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.677 "strip_size_kb": 64, 00:11:38.677 "state": "configuring", 00:11:38.677 "raid_level": "concat", 00:11:38.677 "superblock": false, 00:11:38.677 "num_base_bdevs": 3, 00:11:38.677 "num_base_bdevs_discovered": 2, 00:11:38.677 "num_base_bdevs_operational": 3, 00:11:38.677 "base_bdevs_list": [ 00:11:38.677 { 00:11:38.677 "name": "BaseBdev1", 00:11:38.677 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:38.677 "is_configured": true, 00:11:38.677 "data_offset": 0, 00:11:38.677 "data_size": 65536 00:11:38.677 }, 00:11:38.677 { 00:11:38.677 "name": "BaseBdev2", 00:11:38.677 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:38.677 "is_configured": true, 00:11:38.677 "data_offset": 0, 00:11:38.677 "data_size": 65536 00:11:38.677 }, 00:11:38.677 { 00:11:38.677 "name": "BaseBdev3", 00:11:38.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.677 "is_configured": false, 00:11:38.677 "data_offset": 0, 00:11:38.677 "data_size": 0 00:11:38.677 } 00:11:38.677 ] 00:11:38.677 }' 00:11:38.677 11:22:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.677 11:22:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.614 [2024-07-25 11:22:55.440703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.614 [2024-07-25 11:22:55.440974] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:39.614 [2024-07-25 11:22:55.441115] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:39.614 [2024-07-25 11:22:55.441525] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:39.614 [2024-07-25 11:22:55.441888] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:39.614 [2024-07-25 11:22:55.442026] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:39.614 [2024-07-25 11:22:55.442500] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.614 BaseBdev3 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.614 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.873 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:40.132 [ 00:11:40.132 { 00:11:40.132 "name": "BaseBdev3", 00:11:40.132 "aliases": [ 00:11:40.132 "30132351-b68e-4de8-8551-7caa3f050eca" 00:11:40.132 ], 00:11:40.132 "product_name": "Malloc disk", 00:11:40.132 "block_size": 512, 00:11:40.132 "num_blocks": 65536, 00:11:40.132 "uuid": "30132351-b68e-4de8-8551-7caa3f050eca", 00:11:40.132 "assigned_rate_limits": { 00:11:40.132 "rw_ios_per_sec": 0, 00:11:40.132 "rw_mbytes_per_sec": 0, 00:11:40.132 "r_mbytes_per_sec": 0, 00:11:40.132 "w_mbytes_per_sec": 0 00:11:40.132 }, 00:11:40.132 "claimed": true, 00:11:40.132 "claim_type": "exclusive_write", 00:11:40.132 "zoned": false, 00:11:40.132 "supported_io_types": { 00:11:40.132 "read": true, 00:11:40.132 "write": true, 00:11:40.132 "unmap": true, 00:11:40.132 "flush": true, 00:11:40.132 "reset": true, 00:11:40.132 "nvme_admin": false, 00:11:40.132 "nvme_io": false, 00:11:40.132 "nvme_io_md": false, 00:11:40.132 "write_zeroes": true, 00:11:40.132 "zcopy": true, 00:11:40.132 "get_zone_info": false, 00:11:40.132 "zone_management": false, 00:11:40.132 "zone_append": false, 00:11:40.132 "compare": false, 00:11:40.132 "compare_and_write": false, 00:11:40.132 "abort": true, 00:11:40.132 "seek_hole": false, 00:11:40.132 "seek_data": false, 00:11:40.132 "copy": true, 00:11:40.132 "nvme_iov_md": false 00:11:40.132 }, 00:11:40.132 "memory_domains": [ 00:11:40.132 { 00:11:40.132 "dma_device_id": "system", 00:11:40.132 "dma_device_type": 1 00:11:40.132 }, 00:11:40.132 { 00:11:40.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.132 "dma_device_type": 2 00:11:40.132 } 00:11:40.132 ], 00:11:40.132 "driver_specific": {} 00:11:40.132 } 00:11:40.132 ] 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:40.132 
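For anyone replaying this part of the trace by hand, the loop above is the test's normal assembly path: the concat raid is registered while two of its base bdevs are still missing (so its state stays "configuring"), then each malloc base bdev is created and claimed until the volume flips to "online". A minimal sketch of that sequence, using only the RPCs visible in this run (the rpc.py path, the /var/tmp/spdk-raid.sock socket, and the bdev names are the test's own; the $rpc shorthand is just a convenience added here):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# BaseBdev1 is already present at this point in the trace (a 32 MiB malloc bdev with 512-byte blocks)
$rpc bdev_malloc_create 32 512 -b BaseBdev1
# register the concat volume; BaseBdev2/BaseBdev3 do not exist yet, so it reports state "configuring"
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# create the remaining base bdevs one at a time; each is claimed by the raid as soon as it appears
$rpc bdev_malloc_create 32 512 -b BaseBdev2
$rpc bdev_malloc_create 32 512 -b BaseBdev3
# once the last base bdev is claimed, the volume reports "online" with 3 of 3 base bdevs discovered
$rpc bdev_raid_get_bdevs all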
11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.132 11:22:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.391 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.391 "name": "Existed_Raid", 00:11:40.391 "uuid": "cb4b4de4-2dd0-41e8-962c-fa5fe5d728ab", 00:11:40.391 "strip_size_kb": 64, 00:11:40.391 "state": "online", 00:11:40.391 "raid_level": "concat", 00:11:40.391 "superblock": false, 00:11:40.391 "num_base_bdevs": 3, 00:11:40.391 "num_base_bdevs_discovered": 3, 00:11:40.391 "num_base_bdevs_operational": 3, 00:11:40.391 "base_bdevs_list": [ 00:11:40.391 { 00:11:40.391 "name": "BaseBdev1", 00:11:40.391 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:40.391 "is_configured": true, 00:11:40.391 "data_offset": 0, 00:11:40.391 "data_size": 65536 00:11:40.391 }, 00:11:40.391 { 00:11:40.391 "name": "BaseBdev2", 00:11:40.391 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:40.391 "is_configured": true, 00:11:40.391 "data_offset": 0, 00:11:40.391 "data_size": 65536 00:11:40.391 }, 00:11:40.391 { 00:11:40.391 "name": "BaseBdev3", 00:11:40.391 "uuid": "30132351-b68e-4de8-8551-7caa3f050eca", 00:11:40.391 "is_configured": true, 00:11:40.391 "data_offset": 0, 00:11:40.391 "data_size": 65536 00:11:40.391 } 00:11:40.391 ] 00:11:40.391 }' 00:11:40.391 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.391 11:22:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:41.328 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:41.329 11:22:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:41.329 11:22:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:41.329 [2024-07-25 11:22:57.153609] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.329 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:41.329 "name": "Existed_Raid", 00:11:41.329 "aliases": [ 00:11:41.329 "cb4b4de4-2dd0-41e8-962c-fa5fe5d728ab" 00:11:41.329 ], 00:11:41.329 "product_name": "Raid Volume", 00:11:41.329 "block_size": 512, 00:11:41.329 "num_blocks": 196608, 00:11:41.329 "uuid": "cb4b4de4-2dd0-41e8-962c-fa5fe5d728ab", 00:11:41.329 "assigned_rate_limits": { 00:11:41.329 "rw_ios_per_sec": 0, 00:11:41.329 "rw_mbytes_per_sec": 0, 00:11:41.329 "r_mbytes_per_sec": 0, 00:11:41.329 "w_mbytes_per_sec": 0 00:11:41.329 }, 00:11:41.329 "claimed": false, 00:11:41.329 "zoned": false, 00:11:41.329 "supported_io_types": { 00:11:41.329 "read": true, 00:11:41.329 "write": true, 00:11:41.329 "unmap": true, 00:11:41.329 "flush": true, 00:11:41.329 "reset": true, 00:11:41.329 "nvme_admin": false, 00:11:41.329 "nvme_io": false, 00:11:41.329 "nvme_io_md": false, 00:11:41.329 "write_zeroes": true, 00:11:41.329 "zcopy": false, 00:11:41.329 "get_zone_info": false, 00:11:41.329 "zone_management": false, 00:11:41.329 "zone_append": false, 00:11:41.329 "compare": false, 00:11:41.329 "compare_and_write": false, 00:11:41.329 "abort": false, 00:11:41.329 "seek_hole": false, 00:11:41.329 "seek_data": false, 00:11:41.329 "copy": false, 00:11:41.329 "nvme_iov_md": false 00:11:41.329 }, 00:11:41.329 "memory_domains": [ 00:11:41.329 { 00:11:41.329 "dma_device_id": "system", 00:11:41.329 "dma_device_type": 1 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.329 "dma_device_type": 2 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "dma_device_id": "system", 00:11:41.329 "dma_device_type": 1 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.329 "dma_device_type": 2 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "dma_device_id": "system", 00:11:41.329 "dma_device_type": 1 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.329 "dma_device_type": 2 00:11:41.329 } 00:11:41.329 ], 00:11:41.329 "driver_specific": { 00:11:41.329 "raid": { 00:11:41.329 "uuid": "cb4b4de4-2dd0-41e8-962c-fa5fe5d728ab", 00:11:41.329 "strip_size_kb": 64, 00:11:41.329 "state": "online", 00:11:41.329 "raid_level": "concat", 00:11:41.329 "superblock": false, 00:11:41.329 "num_base_bdevs": 3, 00:11:41.329 "num_base_bdevs_discovered": 3, 00:11:41.329 "num_base_bdevs_operational": 3, 00:11:41.329 "base_bdevs_list": [ 00:11:41.329 { 00:11:41.329 "name": "BaseBdev1", 00:11:41.329 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:41.329 "is_configured": true, 00:11:41.329 "data_offset": 0, 00:11:41.329 "data_size": 65536 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "name": "BaseBdev2", 00:11:41.329 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:41.329 "is_configured": true, 00:11:41.329 "data_offset": 0, 00:11:41.329 "data_size": 65536 00:11:41.329 }, 00:11:41.329 { 00:11:41.329 "name": "BaseBdev3", 00:11:41.329 "uuid": "30132351-b68e-4de8-8551-7caa3f050eca", 00:11:41.329 "is_configured": true, 00:11:41.329 "data_offset": 0, 00:11:41.329 "data_size": 65536 00:11:41.329 } 
00:11:41.329 ] 00:11:41.329 } 00:11:41.329 } 00:11:41.329 }' 00:11:41.329 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.589 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:41.589 BaseBdev2 00:11:41.589 BaseBdev3' 00:11:41.589 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:41.589 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:41.589 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:41.848 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:41.848 "name": "BaseBdev1", 00:11:41.848 "aliases": [ 00:11:41.848 "2968b05f-fce9-4da1-93a9-774f1cc47bdd" 00:11:41.848 ], 00:11:41.848 "product_name": "Malloc disk", 00:11:41.848 "block_size": 512, 00:11:41.848 "num_blocks": 65536, 00:11:41.848 "uuid": "2968b05f-fce9-4da1-93a9-774f1cc47bdd", 00:11:41.848 "assigned_rate_limits": { 00:11:41.848 "rw_ios_per_sec": 0, 00:11:41.848 "rw_mbytes_per_sec": 0, 00:11:41.848 "r_mbytes_per_sec": 0, 00:11:41.848 "w_mbytes_per_sec": 0 00:11:41.848 }, 00:11:41.848 "claimed": true, 00:11:41.848 "claim_type": "exclusive_write", 00:11:41.848 "zoned": false, 00:11:41.848 "supported_io_types": { 00:11:41.848 "read": true, 00:11:41.848 "write": true, 00:11:41.848 "unmap": true, 00:11:41.848 "flush": true, 00:11:41.848 "reset": true, 00:11:41.848 "nvme_admin": false, 00:11:41.848 "nvme_io": false, 00:11:41.848 "nvme_io_md": false, 00:11:41.848 "write_zeroes": true, 00:11:41.848 "zcopy": true, 00:11:41.848 "get_zone_info": false, 00:11:41.848 "zone_management": false, 00:11:41.848 "zone_append": false, 00:11:41.848 "compare": false, 00:11:41.848 "compare_and_write": false, 00:11:41.848 "abort": true, 00:11:41.848 "seek_hole": false, 00:11:41.848 "seek_data": false, 00:11:41.848 "copy": true, 00:11:41.848 "nvme_iov_md": false 00:11:41.848 }, 00:11:41.848 "memory_domains": [ 00:11:41.848 { 00:11:41.848 "dma_device_id": "system", 00:11:41.848 "dma_device_type": 1 00:11:41.848 }, 00:11:41.848 { 00:11:41.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.848 "dma_device_type": 2 00:11:41.848 } 00:11:41.848 ], 00:11:41.848 "driver_specific": {} 00:11:41.848 }' 00:11:41.848 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.848 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.848 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:41.849 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.849 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.849 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.849 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:42.107 11:22:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:42.366 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:42.366 "name": "BaseBdev2", 00:11:42.366 "aliases": [ 00:11:42.366 "d1e26747-7b6f-483d-8932-5aaf3d4bafb8" 00:11:42.366 ], 00:11:42.366 "product_name": "Malloc disk", 00:11:42.366 "block_size": 512, 00:11:42.366 "num_blocks": 65536, 00:11:42.366 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:42.366 "assigned_rate_limits": { 00:11:42.366 "rw_ios_per_sec": 0, 00:11:42.366 "rw_mbytes_per_sec": 0, 00:11:42.366 "r_mbytes_per_sec": 0, 00:11:42.366 "w_mbytes_per_sec": 0 00:11:42.366 }, 00:11:42.366 "claimed": true, 00:11:42.366 "claim_type": "exclusive_write", 00:11:42.366 "zoned": false, 00:11:42.366 "supported_io_types": { 00:11:42.366 "read": true, 00:11:42.366 "write": true, 00:11:42.366 "unmap": true, 00:11:42.366 "flush": true, 00:11:42.366 "reset": true, 00:11:42.366 "nvme_admin": false, 00:11:42.366 "nvme_io": false, 00:11:42.366 "nvme_io_md": false, 00:11:42.366 "write_zeroes": true, 00:11:42.366 "zcopy": true, 00:11:42.366 "get_zone_info": false, 00:11:42.366 "zone_management": false, 00:11:42.366 "zone_append": false, 00:11:42.366 "compare": false, 00:11:42.366 "compare_and_write": false, 00:11:42.366 "abort": true, 00:11:42.366 "seek_hole": false, 00:11:42.366 "seek_data": false, 00:11:42.366 "copy": true, 00:11:42.366 "nvme_iov_md": false 00:11:42.366 }, 00:11:42.366 "memory_domains": [ 00:11:42.366 { 00:11:42.366 "dma_device_id": "system", 00:11:42.366 "dma_device_type": 1 00:11:42.366 }, 00:11:42.366 { 00:11:42.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.366 "dma_device_type": 2 00:11:42.366 } 00:11:42.366 ], 00:11:42.366 "driver_specific": {} 00:11:42.366 }' 00:11:42.366 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:42.366 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:42.625 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:42.885 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:42.885 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:11:42.885 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:42.885 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:42.885 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:43.144 "name": "BaseBdev3", 00:11:43.144 "aliases": [ 00:11:43.144 "30132351-b68e-4de8-8551-7caa3f050eca" 00:11:43.144 ], 00:11:43.144 "product_name": "Malloc disk", 00:11:43.144 "block_size": 512, 00:11:43.144 "num_blocks": 65536, 00:11:43.144 "uuid": "30132351-b68e-4de8-8551-7caa3f050eca", 00:11:43.144 "assigned_rate_limits": { 00:11:43.144 "rw_ios_per_sec": 0, 00:11:43.144 "rw_mbytes_per_sec": 0, 00:11:43.144 "r_mbytes_per_sec": 0, 00:11:43.144 "w_mbytes_per_sec": 0 00:11:43.144 }, 00:11:43.144 "claimed": true, 00:11:43.144 "claim_type": "exclusive_write", 00:11:43.144 "zoned": false, 00:11:43.144 "supported_io_types": { 00:11:43.144 "read": true, 00:11:43.144 "write": true, 00:11:43.144 "unmap": true, 00:11:43.144 "flush": true, 00:11:43.144 "reset": true, 00:11:43.144 "nvme_admin": false, 00:11:43.144 "nvme_io": false, 00:11:43.144 "nvme_io_md": false, 00:11:43.144 "write_zeroes": true, 00:11:43.144 "zcopy": true, 00:11:43.144 "get_zone_info": false, 00:11:43.144 "zone_management": false, 00:11:43.144 "zone_append": false, 00:11:43.144 "compare": false, 00:11:43.144 "compare_and_write": false, 00:11:43.144 "abort": true, 00:11:43.144 "seek_hole": false, 00:11:43.144 "seek_data": false, 00:11:43.144 "copy": true, 00:11:43.144 "nvme_iov_md": false 00:11:43.144 }, 00:11:43.144 "memory_domains": [ 00:11:43.144 { 00:11:43.144 "dma_device_id": "system", 00:11:43.144 "dma_device_type": 1 00:11:43.144 }, 00:11:43.144 { 00:11:43.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.144 "dma_device_type": 2 00:11:43.144 } 00:11:43.144 ], 00:11:43.144 "driver_specific": {} 00:11:43.144 }' 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:43.144 11:22:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:43.144 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:43.403 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:43.403 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:43.403 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:43.403 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:43.403 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:11:43.662 [2024-07-25 11:22:59.365876] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.662 [2024-07-25 11:22:59.365927] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.662 [2024-07-25 11:22:59.365996] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.662 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.920 11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.920 "name": "Existed_Raid", 00:11:43.920 "uuid": "cb4b4de4-2dd0-41e8-962c-fa5fe5d728ab", 00:11:43.920 "strip_size_kb": 64, 00:11:43.920 "state": "offline", 00:11:43.920 "raid_level": "concat", 00:11:43.921 "superblock": false, 00:11:43.921 "num_base_bdevs": 3, 00:11:43.921 "num_base_bdevs_discovered": 2, 00:11:43.921 "num_base_bdevs_operational": 2, 00:11:43.921 "base_bdevs_list": [ 00:11:43.921 { 00:11:43.921 "name": null, 00:11:43.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.921 "is_configured": false, 00:11:43.921 "data_offset": 0, 00:11:43.921 "data_size": 65536 00:11:43.921 }, 00:11:43.921 { 00:11:43.921 "name": "BaseBdev2", 00:11:43.921 "uuid": "d1e26747-7b6f-483d-8932-5aaf3d4bafb8", 00:11:43.921 "is_configured": true, 00:11:43.921 "data_offset": 0, 00:11:43.921 "data_size": 65536 00:11:43.921 }, 00:11:43.921 { 00:11:43.921 "name": "BaseBdev3", 00:11:43.921 "uuid": "30132351-b68e-4de8-8551-7caa3f050eca", 00:11:43.921 "is_configured": true, 00:11:43.921 "data_offset": 0, 00:11:43.921 "data_size": 65536 00:11:43.921 } 00:11:43.921 ] 00:11:43.921 }' 00:11:43.921 
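The step above is the state transition this test is really after: has_redundancy returns 1 for concat, so losing any claimed base bdev cannot be tolerated and the raid moves from "online" to "offline" with only 2 of 3 base bdevs still operational. A short sketch of that single check against the same socket (the trailing .state selector on the jq filter is added here for brevity; every command otherwise appears verbatim in the trace):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# deleting any base bdev of a non-redundant level such as concat takes the whole volume down
$rpc bdev_malloc_delete BaseBdev1
# the raid bdev is still listed, but its state is now "offline" and BaseBdev1 shows as unconfigured
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'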
11:22:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.921 11:22:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.487 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:44.487 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:44.487 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.487 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:44.745 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:44.745 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.745 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:45.003 [2024-07-25 11:23:00.852531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.262 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:45.262 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:45.262 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.262 11:23:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:45.521 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:45.521 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.521 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:45.779 [2024-07-25 11:23:01.421694] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.779 [2024-07-25 11:23:01.421774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:45.779 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:45.779 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:45.779 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.779 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.038 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:46.039 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:46.039 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:46.039 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:46.039 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:46.039 11:23:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.298 BaseBdev2 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.298 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.556 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.813 [ 00:11:46.813 { 00:11:46.813 "name": "BaseBdev2", 00:11:46.813 "aliases": [ 00:11:46.813 "b7376aaa-df1c-47de-acbe-f7d8ae80cf56" 00:11:46.813 ], 00:11:46.813 "product_name": "Malloc disk", 00:11:46.813 "block_size": 512, 00:11:46.813 "num_blocks": 65536, 00:11:46.813 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:46.813 "assigned_rate_limits": { 00:11:46.813 "rw_ios_per_sec": 0, 00:11:46.813 "rw_mbytes_per_sec": 0, 00:11:46.813 "r_mbytes_per_sec": 0, 00:11:46.813 "w_mbytes_per_sec": 0 00:11:46.813 }, 00:11:46.813 "claimed": false, 00:11:46.813 "zoned": false, 00:11:46.813 "supported_io_types": { 00:11:46.813 "read": true, 00:11:46.813 "write": true, 00:11:46.813 "unmap": true, 00:11:46.813 "flush": true, 00:11:46.813 "reset": true, 00:11:46.813 "nvme_admin": false, 00:11:46.813 "nvme_io": false, 00:11:46.813 "nvme_io_md": false, 00:11:46.813 "write_zeroes": true, 00:11:46.813 "zcopy": true, 00:11:46.813 "get_zone_info": false, 00:11:46.813 "zone_management": false, 00:11:46.813 "zone_append": false, 00:11:46.813 "compare": false, 00:11:46.813 "compare_and_write": false, 00:11:46.813 "abort": true, 00:11:46.813 "seek_hole": false, 00:11:46.813 "seek_data": false, 00:11:46.813 "copy": true, 00:11:46.813 "nvme_iov_md": false 00:11:46.813 }, 00:11:46.813 "memory_domains": [ 00:11:46.813 { 00:11:46.813 "dma_device_id": "system", 00:11:46.813 "dma_device_type": 1 00:11:46.813 }, 00:11:46.813 { 00:11:46.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.813 "dma_device_type": 2 00:11:46.813 } 00:11:46.813 ], 00:11:46.813 "driver_specific": {} 00:11:46.813 } 00:11:46.813 ] 00:11:46.813 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.813 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:46.813 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:46.813 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.380 BaseBdev3 00:11:47.380 11:23:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:47.380 11:23:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:47.380 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.947 [ 00:11:47.947 { 00:11:47.947 "name": "BaseBdev3", 00:11:47.947 "aliases": [ 00:11:47.947 "12dd832f-b39e-4833-958a-4d12d07ed84e" 00:11:47.947 ], 00:11:47.947 "product_name": "Malloc disk", 00:11:47.947 "block_size": 512, 00:11:47.947 "num_blocks": 65536, 00:11:47.947 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:47.947 "assigned_rate_limits": { 00:11:47.947 "rw_ios_per_sec": 0, 00:11:47.947 "rw_mbytes_per_sec": 0, 00:11:47.947 "r_mbytes_per_sec": 0, 00:11:47.947 "w_mbytes_per_sec": 0 00:11:47.947 }, 00:11:47.947 "claimed": false, 00:11:47.947 "zoned": false, 00:11:47.947 "supported_io_types": { 00:11:47.947 "read": true, 00:11:47.947 "write": true, 00:11:47.947 "unmap": true, 00:11:47.947 "flush": true, 00:11:47.947 "reset": true, 00:11:47.947 "nvme_admin": false, 00:11:47.947 "nvme_io": false, 00:11:47.947 "nvme_io_md": false, 00:11:47.947 "write_zeroes": true, 00:11:47.947 "zcopy": true, 00:11:47.947 "get_zone_info": false, 00:11:47.947 "zone_management": false, 00:11:47.947 "zone_append": false, 00:11:47.947 "compare": false, 00:11:47.947 "compare_and_write": false, 00:11:47.947 "abort": true, 00:11:47.947 "seek_hole": false, 00:11:47.947 "seek_data": false, 00:11:47.947 "copy": true, 00:11:47.947 "nvme_iov_md": false 00:11:47.947 }, 00:11:47.947 "memory_domains": [ 00:11:47.947 { 00:11:47.947 "dma_device_id": "system", 00:11:47.947 "dma_device_type": 1 00:11:47.947 }, 00:11:47.947 { 00:11:47.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.947 "dma_device_type": 2 00:11:47.947 } 00:11:47.947 ], 00:11:47.947 "driver_specific": {} 00:11:47.947 } 00:11:47.947 ] 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:47.947 [2024-07-25 11:23:03.761845] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.947 [2024-07-25 11:23:03.761918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.947 [2024-07-25 11:23:03.761977] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.947 [2024-07-25 11:23:03.764270] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.947 11:23:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.947 11:23:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.205 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.205 "name": "Existed_Raid", 00:11:48.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.205 "strip_size_kb": 64, 00:11:48.205 "state": "configuring", 00:11:48.205 "raid_level": "concat", 00:11:48.205 "superblock": false, 00:11:48.205 "num_base_bdevs": 3, 00:11:48.205 "num_base_bdevs_discovered": 2, 00:11:48.205 "num_base_bdevs_operational": 3, 00:11:48.205 "base_bdevs_list": [ 00:11:48.205 { 00:11:48.205 "name": "BaseBdev1", 00:11:48.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.205 "is_configured": false, 00:11:48.205 "data_offset": 0, 00:11:48.205 "data_size": 0 00:11:48.205 }, 00:11:48.205 { 00:11:48.205 "name": "BaseBdev2", 00:11:48.205 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:48.205 "is_configured": true, 00:11:48.205 "data_offset": 0, 00:11:48.205 "data_size": 65536 00:11:48.205 }, 00:11:48.205 { 00:11:48.205 "name": "BaseBdev3", 00:11:48.205 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:48.205 "is_configured": true, 00:11:48.205 "data_offset": 0, 00:11:48.205 "data_size": 65536 00:11:48.205 } 00:11:48.205 ] 00:11:48.205 }' 00:11:48.205 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.205 11:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.138 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:49.139 [2024-07-25 11:23:04.958099] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.139 11:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.396 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.397 "name": "Existed_Raid", 00:11:49.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.397 "strip_size_kb": 64, 00:11:49.397 "state": "configuring", 00:11:49.397 "raid_level": "concat", 00:11:49.397 "superblock": false, 00:11:49.397 "num_base_bdevs": 3, 00:11:49.397 "num_base_bdevs_discovered": 1, 00:11:49.397 "num_base_bdevs_operational": 3, 00:11:49.397 "base_bdevs_list": [ 00:11:49.397 { 00:11:49.397 "name": "BaseBdev1", 00:11:49.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.397 "is_configured": false, 00:11:49.397 "data_offset": 0, 00:11:49.397 "data_size": 0 00:11:49.397 }, 00:11:49.397 { 00:11:49.397 "name": null, 00:11:49.397 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:49.397 "is_configured": false, 00:11:49.397 "data_offset": 0, 00:11:49.397 "data_size": 65536 00:11:49.397 }, 00:11:49.397 { 00:11:49.397 "name": "BaseBdev3", 00:11:49.397 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:49.397 "is_configured": true, 00:11:49.397 "data_offset": 0, 00:11:49.397 "data_size": 65536 00:11:49.397 } 00:11:49.397 ] 00:11:49.397 }' 00:11:49.397 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.397 11:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.963 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.963 11:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.222 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:50.222 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.826 [2024-07-25 11:23:06.373976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.826 BaseBdev1 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.826 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.084 [ 00:11:51.084 { 00:11:51.084 "name": "BaseBdev1", 00:11:51.084 "aliases": [ 00:11:51.084 "bc58989f-10ef-44fb-8322-84fee003f775" 00:11:51.084 ], 00:11:51.084 "product_name": "Malloc disk", 00:11:51.084 "block_size": 512, 00:11:51.084 "num_blocks": 65536, 00:11:51.084 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:51.084 "assigned_rate_limits": { 00:11:51.084 "rw_ios_per_sec": 0, 00:11:51.084 "rw_mbytes_per_sec": 0, 00:11:51.084 "r_mbytes_per_sec": 0, 00:11:51.085 "w_mbytes_per_sec": 0 00:11:51.085 }, 00:11:51.085 "claimed": true, 00:11:51.085 "claim_type": "exclusive_write", 00:11:51.085 "zoned": false, 00:11:51.085 "supported_io_types": { 00:11:51.085 "read": true, 00:11:51.085 "write": true, 00:11:51.085 "unmap": true, 00:11:51.085 "flush": true, 00:11:51.085 "reset": true, 00:11:51.085 "nvme_admin": false, 00:11:51.085 "nvme_io": false, 00:11:51.085 "nvme_io_md": false, 00:11:51.085 "write_zeroes": true, 00:11:51.085 "zcopy": true, 00:11:51.085 "get_zone_info": false, 00:11:51.085 "zone_management": false, 00:11:51.085 "zone_append": false, 00:11:51.085 "compare": false, 00:11:51.085 "compare_and_write": false, 00:11:51.085 "abort": true, 00:11:51.085 "seek_hole": false, 00:11:51.085 "seek_data": false, 00:11:51.085 "copy": true, 00:11:51.085 "nvme_iov_md": false 00:11:51.085 }, 00:11:51.085 "memory_domains": [ 00:11:51.085 { 00:11:51.085 "dma_device_id": "system", 00:11:51.085 "dma_device_type": 1 00:11:51.085 }, 00:11:51.085 { 00:11:51.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.085 "dma_device_type": 2 00:11:51.085 } 00:11:51.085 ], 00:11:51.085 "driver_specific": {} 00:11:51.085 } 00:11:51.085 ] 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.085 11:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.651 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.651 "name": "Existed_Raid", 00:11:51.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.651 "strip_size_kb": 64, 00:11:51.651 "state": "configuring", 00:11:51.651 "raid_level": "concat", 00:11:51.651 "superblock": false, 00:11:51.651 "num_base_bdevs": 3, 00:11:51.651 "num_base_bdevs_discovered": 2, 00:11:51.651 "num_base_bdevs_operational": 3, 00:11:51.651 "base_bdevs_list": [ 00:11:51.651 { 00:11:51.651 "name": "BaseBdev1", 00:11:51.651 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:51.651 "is_configured": true, 00:11:51.651 "data_offset": 0, 00:11:51.651 "data_size": 65536 00:11:51.651 }, 00:11:51.651 { 00:11:51.651 "name": null, 00:11:51.651 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:51.651 "is_configured": false, 00:11:51.651 "data_offset": 0, 00:11:51.651 "data_size": 65536 00:11:51.651 }, 00:11:51.651 { 00:11:51.651 "name": "BaseBdev3", 00:11:51.651 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:51.651 "is_configured": true, 00:11:51.651 "data_offset": 0, 00:11:51.651 "data_size": 65536 00:11:51.651 } 00:11:51.651 ] 00:11:51.651 }' 00:11:51.651 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.651 11:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.217 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.217 11:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:52.475 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:52.475 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:52.475 [2024-07-25 11:23:08.354592] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.733 "name": "Existed_Raid", 00:11:52.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.733 "strip_size_kb": 64, 00:11:52.733 "state": "configuring", 00:11:52.733 "raid_level": "concat", 00:11:52.733 "superblock": false, 00:11:52.733 "num_base_bdevs": 3, 00:11:52.733 "num_base_bdevs_discovered": 1, 00:11:52.733 "num_base_bdevs_operational": 3, 00:11:52.733 "base_bdevs_list": [ 00:11:52.733 { 00:11:52.733 "name": "BaseBdev1", 00:11:52.733 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:52.733 "is_configured": true, 00:11:52.733 "data_offset": 0, 00:11:52.733 "data_size": 65536 00:11:52.733 }, 00:11:52.733 { 00:11:52.733 "name": null, 00:11:52.733 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:52.733 "is_configured": false, 00:11:52.733 "data_offset": 0, 00:11:52.733 "data_size": 65536 00:11:52.733 }, 00:11:52.733 { 00:11:52.733 "name": null, 00:11:52.733 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:52.733 "is_configured": false, 00:11:52.733 "data_offset": 0, 00:11:52.733 "data_size": 65536 00:11:52.733 } 00:11:52.733 ] 00:11:52.733 }' 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.733 11:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.668 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:53.668 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.668 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:53.668 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:53.927 [2024-07-25 11:23:09.803043] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.186 11:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.186 11:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.444 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.444 "name": "Existed_Raid", 00:11:54.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.444 "strip_size_kb": 64, 00:11:54.444 "state": "configuring", 00:11:54.444 "raid_level": "concat", 00:11:54.444 "superblock": false, 00:11:54.444 "num_base_bdevs": 3, 00:11:54.444 "num_base_bdevs_discovered": 2, 00:11:54.444 "num_base_bdevs_operational": 3, 00:11:54.444 "base_bdevs_list": [ 00:11:54.444 { 00:11:54.444 "name": "BaseBdev1", 00:11:54.444 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:54.444 "is_configured": true, 00:11:54.444 "data_offset": 0, 00:11:54.444 "data_size": 65536 00:11:54.444 }, 00:11:54.444 { 00:11:54.444 "name": null, 00:11:54.444 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:54.444 "is_configured": false, 00:11:54.444 "data_offset": 0, 00:11:54.444 "data_size": 65536 00:11:54.444 }, 00:11:54.444 { 00:11:54.444 "name": "BaseBdev3", 00:11:54.444 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:54.444 "is_configured": true, 00:11:54.444 "data_offset": 0, 00:11:54.444 "data_size": 65536 00:11:54.444 } 00:11:54.444 ] 00:11:54.444 }' 00:11:54.444 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.444 11:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.012 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.012 11:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:55.271 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:55.271 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:55.530 [2024-07-25 11:23:11.283460] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.530 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.789 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.789 "name": "Existed_Raid", 00:11:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.789 "strip_size_kb": 64, 00:11:55.789 "state": "configuring", 00:11:55.789 "raid_level": "concat", 00:11:55.789 "superblock": false, 00:11:55.789 "num_base_bdevs": 3, 00:11:55.789 "num_base_bdevs_discovered": 1, 00:11:55.789 "num_base_bdevs_operational": 3, 00:11:55.789 "base_bdevs_list": [ 00:11:55.789 { 00:11:55.789 "name": null, 00:11:55.789 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:55.789 "is_configured": false, 00:11:55.789 "data_offset": 0, 00:11:55.789 "data_size": 65536 00:11:55.789 }, 00:11:55.789 { 00:11:55.789 "name": null, 00:11:55.789 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:55.789 "is_configured": false, 00:11:55.789 "data_offset": 0, 00:11:55.789 "data_size": 65536 00:11:55.789 }, 00:11:55.789 { 00:11:55.789 "name": "BaseBdev3", 00:11:55.789 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:55.789 "is_configured": true, 00:11:55.789 "data_offset": 0, 00:11:55.789 "data_size": 65536 00:11:55.789 } 00:11:55.789 ] 00:11:55.789 }' 00:11:55.789 11:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.789 11:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.356 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.356 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.922 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:56.922 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:57.181 [2024-07-25 11:23:12.806720] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
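The sequence above exercises hot removal and re-insertion of a member while the array is still assembling: deleting the underlying malloc bdev (or calling bdev_raid_remove_base_bdev) leaves the slot present but unconfigured with its UUID preserved, and bdev_raid_add_base_bdev lets a matching bdev reclaim a vacant slot. A condensed, hypothetical replay of those RPCs (commands, bdev names and jq filters as they appear in the log; return-value checks omitted, $RPC deliberately left unquoted so the -s argument word-splits):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_delete BaseBdev1                                          # slot 0 -> name null, is_configured false
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'  # false while the slot is vacant
    $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev2                        # existing BaseBdev2 is claimed into slot 1
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'  # true, discovered count goes back up
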
00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.181 11:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.181 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.181 "name": "Existed_Raid", 00:11:57.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.181 "strip_size_kb": 64, 00:11:57.181 "state": "configuring", 00:11:57.181 "raid_level": "concat", 00:11:57.181 "superblock": false, 00:11:57.181 "num_base_bdevs": 3, 00:11:57.181 "num_base_bdevs_discovered": 2, 00:11:57.181 "num_base_bdevs_operational": 3, 00:11:57.181 "base_bdevs_list": [ 00:11:57.181 { 00:11:57.181 "name": null, 00:11:57.181 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:57.181 "is_configured": false, 00:11:57.181 "data_offset": 0, 00:11:57.181 "data_size": 65536 00:11:57.181 }, 00:11:57.181 { 00:11:57.181 "name": "BaseBdev2", 00:11:57.181 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:57.181 "is_configured": true, 00:11:57.181 "data_offset": 0, 00:11:57.181 "data_size": 65536 00:11:57.181 }, 00:11:57.181 { 00:11:57.181 "name": "BaseBdev3", 00:11:57.181 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:57.181 "is_configured": true, 00:11:57.181 "data_offset": 0, 00:11:57.181 "data_size": 65536 00:11:57.181 } 00:11:57.181 ] 00:11:57.181 }' 00:11:57.181 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.181 11:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.116 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.116 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.116 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:58.116 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.116 11:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.374 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bc58989f-10ef-44fb-8322-84fee003f775 00:11:58.632 [2024-07-25 11:23:14.390446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.632 [2024-07-25 11:23:14.390532] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.632 [2024-07-25 11:23:14.390546] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:58.632 NewBaseBdev 00:11:58.632 [2024-07-25 
11:23:14.390954] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:58.632 [2024-07-25 11:23:14.391158] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:58.632 [2024-07-25 11:23:14.391180] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:58.632 [2024-07-25 11:23:14.391497] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:58.632 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:58.890 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:59.148 [ 00:11:59.148 { 00:11:59.148 "name": "NewBaseBdev", 00:11:59.148 "aliases": [ 00:11:59.148 "bc58989f-10ef-44fb-8322-84fee003f775" 00:11:59.148 ], 00:11:59.148 "product_name": "Malloc disk", 00:11:59.148 "block_size": 512, 00:11:59.148 "num_blocks": 65536, 00:11:59.148 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:59.148 "assigned_rate_limits": { 00:11:59.148 "rw_ios_per_sec": 0, 00:11:59.148 "rw_mbytes_per_sec": 0, 00:11:59.148 "r_mbytes_per_sec": 0, 00:11:59.148 "w_mbytes_per_sec": 0 00:11:59.148 }, 00:11:59.148 "claimed": true, 00:11:59.148 "claim_type": "exclusive_write", 00:11:59.148 "zoned": false, 00:11:59.148 "supported_io_types": { 00:11:59.148 "read": true, 00:11:59.148 "write": true, 00:11:59.148 "unmap": true, 00:11:59.148 "flush": true, 00:11:59.148 "reset": true, 00:11:59.148 "nvme_admin": false, 00:11:59.148 "nvme_io": false, 00:11:59.148 "nvme_io_md": false, 00:11:59.148 "write_zeroes": true, 00:11:59.148 "zcopy": true, 00:11:59.148 "get_zone_info": false, 00:11:59.148 "zone_management": false, 00:11:59.148 "zone_append": false, 00:11:59.148 "compare": false, 00:11:59.148 "compare_and_write": false, 00:11:59.148 "abort": true, 00:11:59.148 "seek_hole": false, 00:11:59.148 "seek_data": false, 00:11:59.148 "copy": true, 00:11:59.148 "nvme_iov_md": false 00:11:59.148 }, 00:11:59.148 "memory_domains": [ 00:11:59.148 { 00:11:59.148 "dma_device_id": "system", 00:11:59.148 "dma_device_type": 1 00:11:59.148 }, 00:11:59.148 { 00:11:59.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.148 "dma_device_type": 2 00:11:59.148 } 00:11:59.148 ], 00:11:59.148 "driver_specific": {} 00:11:59.148 } 00:11:59.148 ] 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.148 11:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.406 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.406 "name": "Existed_Raid", 00:11:59.406 "uuid": "25bd9ffb-d50d-42e3-b893-979cc273463e", 00:11:59.406 "strip_size_kb": 64, 00:11:59.406 "state": "online", 00:11:59.406 "raid_level": "concat", 00:11:59.406 "superblock": false, 00:11:59.406 "num_base_bdevs": 3, 00:11:59.406 "num_base_bdevs_discovered": 3, 00:11:59.406 "num_base_bdevs_operational": 3, 00:11:59.406 "base_bdevs_list": [ 00:11:59.406 { 00:11:59.406 "name": "NewBaseBdev", 00:11:59.406 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:11:59.406 "is_configured": true, 00:11:59.406 "data_offset": 0, 00:11:59.406 "data_size": 65536 00:11:59.406 }, 00:11:59.406 { 00:11:59.406 "name": "BaseBdev2", 00:11:59.406 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:11:59.406 "is_configured": true, 00:11:59.407 "data_offset": 0, 00:11:59.407 "data_size": 65536 00:11:59.407 }, 00:11:59.407 { 00:11:59.407 "name": "BaseBdev3", 00:11:59.407 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:11:59.407 "is_configured": true, 00:11:59.407 "data_offset": 0, 00:11:59.407 "data_size": 65536 00:11:59.407 } 00:11:59.407 ] 00:11:59.407 }' 00:11:59.407 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.407 11:23:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:59.974 11:23:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:59.974 11:23:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:00.232 [2024-07-25 11:23:16.011478] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:00.232 "name": "Existed_Raid", 00:12:00.232 "aliases": [ 00:12:00.232 "25bd9ffb-d50d-42e3-b893-979cc273463e" 00:12:00.232 ], 00:12:00.232 "product_name": "Raid Volume", 00:12:00.232 "block_size": 512, 00:12:00.232 "num_blocks": 196608, 00:12:00.232 "uuid": "25bd9ffb-d50d-42e3-b893-979cc273463e", 00:12:00.232 "assigned_rate_limits": { 00:12:00.232 "rw_ios_per_sec": 0, 00:12:00.232 "rw_mbytes_per_sec": 0, 00:12:00.232 "r_mbytes_per_sec": 0, 00:12:00.232 "w_mbytes_per_sec": 0 00:12:00.232 }, 00:12:00.232 "claimed": false, 00:12:00.232 "zoned": false, 00:12:00.232 "supported_io_types": { 00:12:00.232 "read": true, 00:12:00.232 "write": true, 00:12:00.232 "unmap": true, 00:12:00.232 "flush": true, 00:12:00.232 "reset": true, 00:12:00.232 "nvme_admin": false, 00:12:00.232 "nvme_io": false, 00:12:00.232 "nvme_io_md": false, 00:12:00.232 "write_zeroes": true, 00:12:00.232 "zcopy": false, 00:12:00.232 "get_zone_info": false, 00:12:00.232 "zone_management": false, 00:12:00.232 "zone_append": false, 00:12:00.232 "compare": false, 00:12:00.232 "compare_and_write": false, 00:12:00.232 "abort": false, 00:12:00.232 "seek_hole": false, 00:12:00.232 "seek_data": false, 00:12:00.232 "copy": false, 00:12:00.232 "nvme_iov_md": false 00:12:00.232 }, 00:12:00.232 "memory_domains": [ 00:12:00.232 { 00:12:00.232 "dma_device_id": "system", 00:12:00.232 "dma_device_type": 1 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.232 "dma_device_type": 2 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "dma_device_id": "system", 00:12:00.232 "dma_device_type": 1 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.232 "dma_device_type": 2 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "dma_device_id": "system", 00:12:00.232 "dma_device_type": 1 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.232 "dma_device_type": 2 00:12:00.232 } 00:12:00.232 ], 00:12:00.232 "driver_specific": { 00:12:00.232 "raid": { 00:12:00.232 "uuid": "25bd9ffb-d50d-42e3-b893-979cc273463e", 00:12:00.232 "strip_size_kb": 64, 00:12:00.232 "state": "online", 00:12:00.232 "raid_level": "concat", 00:12:00.232 "superblock": false, 00:12:00.232 "num_base_bdevs": 3, 00:12:00.232 "num_base_bdevs_discovered": 3, 00:12:00.232 "num_base_bdevs_operational": 3, 00:12:00.232 "base_bdevs_list": [ 00:12:00.232 { 00:12:00.232 "name": "NewBaseBdev", 00:12:00.232 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:12:00.232 "is_configured": true, 00:12:00.232 "data_offset": 0, 00:12:00.232 "data_size": 65536 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "name": "BaseBdev2", 00:12:00.232 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:12:00.232 "is_configured": true, 00:12:00.232 "data_offset": 0, 00:12:00.232 "data_size": 65536 00:12:00.232 }, 00:12:00.232 { 00:12:00.232 "name": "BaseBdev3", 00:12:00.232 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:12:00.232 "is_configured": true, 00:12:00.232 "data_offset": 0, 00:12:00.232 "data_size": 65536 00:12:00.232 } 00:12:00.232 ] 00:12:00.232 } 00:12:00.232 } 00:12:00.232 }' 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:00.232 BaseBdev2 00:12:00.232 BaseBdev3' 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:00.232 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:00.491 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:00.491 "name": "NewBaseBdev", 00:12:00.491 "aliases": [ 00:12:00.491 "bc58989f-10ef-44fb-8322-84fee003f775" 00:12:00.491 ], 00:12:00.491 "product_name": "Malloc disk", 00:12:00.491 "block_size": 512, 00:12:00.491 "num_blocks": 65536, 00:12:00.491 "uuid": "bc58989f-10ef-44fb-8322-84fee003f775", 00:12:00.491 "assigned_rate_limits": { 00:12:00.491 "rw_ios_per_sec": 0, 00:12:00.491 "rw_mbytes_per_sec": 0, 00:12:00.491 "r_mbytes_per_sec": 0, 00:12:00.491 "w_mbytes_per_sec": 0 00:12:00.491 }, 00:12:00.491 "claimed": true, 00:12:00.491 "claim_type": "exclusive_write", 00:12:00.491 "zoned": false, 00:12:00.491 "supported_io_types": { 00:12:00.491 "read": true, 00:12:00.491 "write": true, 00:12:00.491 "unmap": true, 00:12:00.491 "flush": true, 00:12:00.491 "reset": true, 00:12:00.491 "nvme_admin": false, 00:12:00.491 "nvme_io": false, 00:12:00.491 "nvme_io_md": false, 00:12:00.491 "write_zeroes": true, 00:12:00.491 "zcopy": true, 00:12:00.491 "get_zone_info": false, 00:12:00.491 "zone_management": false, 00:12:00.491 "zone_append": false, 00:12:00.491 "compare": false, 00:12:00.491 "compare_and_write": false, 00:12:00.491 "abort": true, 00:12:00.491 "seek_hole": false, 00:12:00.491 "seek_data": false, 00:12:00.491 "copy": true, 00:12:00.491 "nvme_iov_md": false 00:12:00.491 }, 00:12:00.491 "memory_domains": [ 00:12:00.491 { 00:12:00.491 "dma_device_id": "system", 00:12:00.491 "dma_device_type": 1 00:12:00.491 }, 00:12:00.491 { 00:12:00.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.491 "dma_device_type": 2 00:12:00.491 } 00:12:00.491 ], 00:12:00.491 "driver_specific": {} 00:12:00.491 }' 00:12:00.491 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:00.749 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.007 11:23:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.007 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:01.265 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.265 "name": "BaseBdev2", 00:12:01.265 "aliases": [ 00:12:01.265 "b7376aaa-df1c-47de-acbe-f7d8ae80cf56" 00:12:01.265 ], 00:12:01.265 "product_name": "Malloc disk", 00:12:01.265 "block_size": 512, 00:12:01.266 "num_blocks": 65536, 00:12:01.266 "uuid": "b7376aaa-df1c-47de-acbe-f7d8ae80cf56", 00:12:01.266 "assigned_rate_limits": { 00:12:01.266 "rw_ios_per_sec": 0, 00:12:01.266 "rw_mbytes_per_sec": 0, 00:12:01.266 "r_mbytes_per_sec": 0, 00:12:01.266 "w_mbytes_per_sec": 0 00:12:01.266 }, 00:12:01.266 "claimed": true, 00:12:01.266 "claim_type": "exclusive_write", 00:12:01.266 "zoned": false, 00:12:01.266 "supported_io_types": { 00:12:01.266 "read": true, 00:12:01.266 "write": true, 00:12:01.266 "unmap": true, 00:12:01.266 "flush": true, 00:12:01.266 "reset": true, 00:12:01.266 "nvme_admin": false, 00:12:01.266 "nvme_io": false, 00:12:01.266 "nvme_io_md": false, 00:12:01.266 "write_zeroes": true, 00:12:01.266 "zcopy": true, 00:12:01.266 "get_zone_info": false, 00:12:01.266 "zone_management": false, 00:12:01.266 "zone_append": false, 00:12:01.266 "compare": false, 00:12:01.266 "compare_and_write": false, 00:12:01.266 "abort": true, 00:12:01.266 "seek_hole": false, 00:12:01.266 "seek_data": false, 00:12:01.266 "copy": true, 00:12:01.266 "nvme_iov_md": false 00:12:01.266 }, 00:12:01.266 "memory_domains": [ 00:12:01.266 { 00:12:01.266 "dma_device_id": "system", 00:12:01.266 "dma_device_type": 1 00:12:01.266 }, 00:12:01.266 { 00:12:01.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.266 "dma_device_type": 2 00:12:01.266 } 00:12:01.266 ], 00:12:01.266 "driver_specific": {} 00:12:01.266 }' 00:12:01.266 11:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.266 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.266 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:01.266 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.524 11:23:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:01.524 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.782 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.782 "name": "BaseBdev3", 00:12:01.782 "aliases": [ 00:12:01.782 "12dd832f-b39e-4833-958a-4d12d07ed84e" 00:12:01.782 ], 00:12:01.782 "product_name": "Malloc disk", 00:12:01.782 "block_size": 512, 00:12:01.782 "num_blocks": 65536, 00:12:01.782 "uuid": "12dd832f-b39e-4833-958a-4d12d07ed84e", 00:12:01.782 "assigned_rate_limits": { 00:12:01.782 "rw_ios_per_sec": 0, 00:12:01.782 "rw_mbytes_per_sec": 0, 00:12:01.782 "r_mbytes_per_sec": 0, 00:12:01.782 "w_mbytes_per_sec": 0 00:12:01.782 }, 00:12:01.782 "claimed": true, 00:12:01.782 "claim_type": "exclusive_write", 00:12:01.782 "zoned": false, 00:12:01.782 "supported_io_types": { 00:12:01.782 "read": true, 00:12:01.782 "write": true, 00:12:01.782 "unmap": true, 00:12:01.782 "flush": true, 00:12:01.782 "reset": true, 00:12:01.782 "nvme_admin": false, 00:12:01.782 "nvme_io": false, 00:12:01.782 "nvme_io_md": false, 00:12:01.782 "write_zeroes": true, 00:12:01.782 "zcopy": true, 00:12:01.782 "get_zone_info": false, 00:12:01.782 "zone_management": false, 00:12:01.782 "zone_append": false, 00:12:01.783 "compare": false, 00:12:01.783 "compare_and_write": false, 00:12:01.783 "abort": true, 00:12:01.783 "seek_hole": false, 00:12:01.783 "seek_data": false, 00:12:01.783 "copy": true, 00:12:01.783 "nvme_iov_md": false 00:12:01.783 }, 00:12:01.783 "memory_domains": [ 00:12:01.783 { 00:12:01.783 "dma_device_id": "system", 00:12:01.783 "dma_device_type": 1 00:12:01.783 }, 00:12:01.783 { 00:12:01.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.783 "dma_device_type": 2 00:12:01.783 } 00:12:01.783 ], 00:12:01.783 "driver_specific": {} 00:12:01.783 }' 00:12:01.783 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.041 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.300 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.300 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.300 11:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.300 [2024-07-25 11:23:18.167735] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.300 [2024-07-25 
11:23:18.167804] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.300 [2024-07-25 11:23:18.167931] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.300 [2024-07-25 11:23:18.168028] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.300 [2024-07-25 11:23:18.168066] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 70074 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 70074 ']' 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 70074 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70074 00:12:02.559 killing process with pid 70074 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70074' 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 70074 00:12:02.559 [2024-07-25 11:23:18.209138] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.559 11:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 70074 00:12:02.818 [2024-07-25 11:23:18.506130] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.241 ************************************ 00:12:04.241 END TEST raid_state_function_test 00:12:04.241 ************************************ 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:04.241 00:12:04.241 real 0m32.284s 00:12:04.241 user 0m58.984s 00:12:04.241 sys 0m4.088s 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.241 11:23:19 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:04.241 11:23:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:04.241 11:23:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.241 11:23:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.241 ************************************ 00:12:04.241 START TEST raid_state_function_test_sb 00:12:04.241 ************************************ 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:04.241 11:23:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:04.241 Process raid pid: 71049 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=71049 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 71049' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 71049 /var/tmp/spdk-raid.sock 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 71049 ']' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:04.241 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.241 11:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.241 [2024-07-25 11:23:20.019401] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:12:04.241 [2024-07-25 11:23:20.019819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.500 [2024-07-25 11:23:20.192584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.758 [2024-07-25 11:23:20.470141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.017 [2024-07-25 11:23:20.701460] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.017 [2024-07-25 11:23:20.701529] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.275 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.275 11:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:05.275 11:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:05.275 [2024-07-25 11:23:21.108312] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.275 [2024-07-25 11:23:21.108413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.275 [2024-07-25 11:23:21.108435] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:05.275 [2024-07-25 11:23:21.108449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:05.275 [2024-07-25 11:23:21.108464] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:05.275 [2024-07-25 11:23:21.108476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
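The superblock variant of the test declares the array before any of its members exist: with -s the raid stays in "configuring" with num_base_bdevs_discovered 0 and claims each base bdev as it is created. A short sketch of that ordering, using the same rpc.py invocations the log shows (the 32 MiB / 512 B malloc geometry is the test's own choice, not a requirement):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # declare the array first; its members may appear later
    $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $RPC bdev_malloc_create 32 512 -b BaseBdev1                           # claimed by the waiting array on creation
    $RPC bdev_raid_get_bdevs all | jq '.[0].num_base_bdevs_discovered'    # -> 1
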
00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.275 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.534 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.534 "name": "Existed_Raid", 00:12:05.534 "uuid": "7a9a7d5e-f3f1-4167-bd84-5ba3e0ed22a2", 00:12:05.534 "strip_size_kb": 64, 00:12:05.534 "state": "configuring", 00:12:05.534 "raid_level": "concat", 00:12:05.534 "superblock": true, 00:12:05.534 "num_base_bdevs": 3, 00:12:05.534 "num_base_bdevs_discovered": 0, 00:12:05.534 "num_base_bdevs_operational": 3, 00:12:05.534 "base_bdevs_list": [ 00:12:05.534 { 00:12:05.534 "name": "BaseBdev1", 00:12:05.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.534 "is_configured": false, 00:12:05.534 "data_offset": 0, 00:12:05.534 "data_size": 0 00:12:05.534 }, 00:12:05.534 { 00:12:05.534 "name": "BaseBdev2", 00:12:05.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.534 "is_configured": false, 00:12:05.534 "data_offset": 0, 00:12:05.534 "data_size": 0 00:12:05.534 }, 00:12:05.534 { 00:12:05.534 "name": "BaseBdev3", 00:12:05.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.534 "is_configured": false, 00:12:05.534 "data_offset": 0, 00:12:05.534 "data_size": 0 00:12:05.534 } 00:12:05.534 ] 00:12:05.534 }' 00:12:05.534 11:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.534 11:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.470 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:06.470 [2024-07-25 11:23:22.320422] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.470 [2024-07-25 11:23:22.320505] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:06.470 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:06.728 [2024-07-25 11:23:22.584553] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.728 [2024-07-25 11:23:22.584667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.728 [2024-07-25 11:23:22.584692] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:06.728 [2024-07-25 11:23:22.584706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:06.728 [2024-07-25 11:23:22.584720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:06.728 [2024-07-25 11:23:22.584732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't 
exist now 00:12:06.728 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.378 [2024-07-25 11:23:22.980652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.378 BaseBdev1 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.378 11:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:07.378 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.636 [ 00:12:07.636 { 00:12:07.636 "name": "BaseBdev1", 00:12:07.636 "aliases": [ 00:12:07.636 "65b14227-d436-4042-b1f3-adc0fe9f0fe3" 00:12:07.636 ], 00:12:07.636 "product_name": "Malloc disk", 00:12:07.636 "block_size": 512, 00:12:07.636 "num_blocks": 65536, 00:12:07.636 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:07.636 "assigned_rate_limits": { 00:12:07.636 "rw_ios_per_sec": 0, 00:12:07.636 "rw_mbytes_per_sec": 0, 00:12:07.636 "r_mbytes_per_sec": 0, 00:12:07.636 "w_mbytes_per_sec": 0 00:12:07.636 }, 00:12:07.636 "claimed": true, 00:12:07.636 "claim_type": "exclusive_write", 00:12:07.636 "zoned": false, 00:12:07.636 "supported_io_types": { 00:12:07.636 "read": true, 00:12:07.636 "write": true, 00:12:07.636 "unmap": true, 00:12:07.636 "flush": true, 00:12:07.636 "reset": true, 00:12:07.636 "nvme_admin": false, 00:12:07.636 "nvme_io": false, 00:12:07.636 "nvme_io_md": false, 00:12:07.636 "write_zeroes": true, 00:12:07.636 "zcopy": true, 00:12:07.636 "get_zone_info": false, 00:12:07.636 "zone_management": false, 00:12:07.636 "zone_append": false, 00:12:07.636 "compare": false, 00:12:07.636 "compare_and_write": false, 00:12:07.636 "abort": true, 00:12:07.636 "seek_hole": false, 00:12:07.636 "seek_data": false, 00:12:07.636 "copy": true, 00:12:07.636 "nvme_iov_md": false 00:12:07.636 }, 00:12:07.636 "memory_domains": [ 00:12:07.636 { 00:12:07.636 "dma_device_id": "system", 00:12:07.636 "dma_device_type": 1 00:12:07.636 }, 00:12:07.636 { 00:12:07.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.636 "dma_device_type": 2 00:12:07.636 } 00:12:07.636 ], 00:12:07.636 "driver_specific": {} 00:12:07.636 } 00:12:07.636 ] 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.636 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.637 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.637 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.637 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.637 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.201 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.201 "name": "Existed_Raid", 00:12:08.201 "uuid": "eaac8029-878d-4489-b1dd-f45e18ebb9f0", 00:12:08.201 "strip_size_kb": 64, 00:12:08.201 "state": "configuring", 00:12:08.201 "raid_level": "concat", 00:12:08.201 "superblock": true, 00:12:08.201 "num_base_bdevs": 3, 00:12:08.201 "num_base_bdevs_discovered": 1, 00:12:08.201 "num_base_bdevs_operational": 3, 00:12:08.201 "base_bdevs_list": [ 00:12:08.201 { 00:12:08.201 "name": "BaseBdev1", 00:12:08.201 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:08.201 "is_configured": true, 00:12:08.201 "data_offset": 2048, 00:12:08.201 "data_size": 63488 00:12:08.201 }, 00:12:08.201 { 00:12:08.201 "name": "BaseBdev2", 00:12:08.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.201 "is_configured": false, 00:12:08.201 "data_offset": 0, 00:12:08.201 "data_size": 0 00:12:08.201 }, 00:12:08.201 { 00:12:08.201 "name": "BaseBdev3", 00:12:08.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.201 "is_configured": false, 00:12:08.201 "data_offset": 0, 00:12:08.201 "data_size": 0 00:12:08.201 } 00:12:08.201 ] 00:12:08.201 }' 00:12:08.201 11:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.201 11:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.767 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:09.025 [2024-07-25 11:23:24.741235] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.025 [2024-07-25 11:23:24.741362] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:09.025 11:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:09.284 [2024-07-25 11:23:25.009316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.284 [2024-07-25 11:23:25.011653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:12:09.284 [2024-07-25 11:23:25.011701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.284 [2024-07-25 11:23:25.011721] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.284 [2024-07-25 11:23:25.011735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.284 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.543 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.543 "name": "Existed_Raid", 00:12:09.543 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:09.543 "strip_size_kb": 64, 00:12:09.543 "state": "configuring", 00:12:09.543 "raid_level": "concat", 00:12:09.543 "superblock": true, 00:12:09.543 "num_base_bdevs": 3, 00:12:09.543 "num_base_bdevs_discovered": 1, 00:12:09.543 "num_base_bdevs_operational": 3, 00:12:09.543 "base_bdevs_list": [ 00:12:09.543 { 00:12:09.543 "name": "BaseBdev1", 00:12:09.543 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:09.543 "is_configured": true, 00:12:09.543 "data_offset": 2048, 00:12:09.543 "data_size": 63488 00:12:09.543 }, 00:12:09.543 { 00:12:09.544 "name": "BaseBdev2", 00:12:09.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.544 "is_configured": false, 00:12:09.544 "data_offset": 0, 00:12:09.544 "data_size": 0 00:12:09.544 }, 00:12:09.544 { 00:12:09.544 "name": "BaseBdev3", 00:12:09.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.544 "is_configured": false, 00:12:09.544 "data_offset": 0, 00:12:09.544 "data_size": 0 00:12:09.544 } 00:12:09.544 ] 00:12:09.544 }' 00:12:09.544 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.544 11:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
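Note the member geometry once -s is in play: each 65536-block malloc bdev now reports data_offset 2048 and data_size 63488 (65536 - 2048), the leading blocks presumably being reserved for the on-disk superblock, and the assembled concat volume later shows 3 x 63488 = 190464 blocks instead of the 196608 seen in the non-superblock run. A one-liner to surface those fields from the same RPC output (filter is ours, commands as in the log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq '.[0].base_bdevs_list[] | {name, uuid, data_offset, data_size}'
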
00:12:10.111 11:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.369 [2024-07-25 11:23:26.157235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.369 BaseBdev2 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:10.369 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.648 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.907 [ 00:12:10.907 { 00:12:10.907 "name": "BaseBdev2", 00:12:10.907 "aliases": [ 00:12:10.907 "8e48f802-d674-4c24-83d4-4358b68fc273" 00:12:10.907 ], 00:12:10.907 "product_name": "Malloc disk", 00:12:10.907 "block_size": 512, 00:12:10.907 "num_blocks": 65536, 00:12:10.907 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:10.907 "assigned_rate_limits": { 00:12:10.907 "rw_ios_per_sec": 0, 00:12:10.907 "rw_mbytes_per_sec": 0, 00:12:10.907 "r_mbytes_per_sec": 0, 00:12:10.907 "w_mbytes_per_sec": 0 00:12:10.907 }, 00:12:10.907 "claimed": true, 00:12:10.907 "claim_type": "exclusive_write", 00:12:10.907 "zoned": false, 00:12:10.907 "supported_io_types": { 00:12:10.907 "read": true, 00:12:10.907 "write": true, 00:12:10.907 "unmap": true, 00:12:10.907 "flush": true, 00:12:10.907 "reset": true, 00:12:10.907 "nvme_admin": false, 00:12:10.907 "nvme_io": false, 00:12:10.907 "nvme_io_md": false, 00:12:10.907 "write_zeroes": true, 00:12:10.907 "zcopy": true, 00:12:10.907 "get_zone_info": false, 00:12:10.907 "zone_management": false, 00:12:10.907 "zone_append": false, 00:12:10.907 "compare": false, 00:12:10.907 "compare_and_write": false, 00:12:10.907 "abort": true, 00:12:10.907 "seek_hole": false, 00:12:10.907 "seek_data": false, 00:12:10.907 "copy": true, 00:12:10.907 "nvme_iov_md": false 00:12:10.907 }, 00:12:10.907 "memory_domains": [ 00:12:10.907 { 00:12:10.907 "dma_device_id": "system", 00:12:10.907 "dma_device_type": 1 00:12:10.907 }, 00:12:10.907 { 00:12:10.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.907 "dma_device_type": 2 00:12:10.907 } 00:12:10.907 ], 00:12:10.907 "driver_specific": {} 00:12:10.907 } 00:12:10.907 ] 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.907 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.165 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.165 "name": "Existed_Raid", 00:12:11.165 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:11.165 "strip_size_kb": 64, 00:12:11.165 "state": "configuring", 00:12:11.165 "raid_level": "concat", 00:12:11.165 "superblock": true, 00:12:11.165 "num_base_bdevs": 3, 00:12:11.165 "num_base_bdevs_discovered": 2, 00:12:11.165 "num_base_bdevs_operational": 3, 00:12:11.165 "base_bdevs_list": [ 00:12:11.165 { 00:12:11.165 "name": "BaseBdev1", 00:12:11.165 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:11.165 "is_configured": true, 00:12:11.165 "data_offset": 2048, 00:12:11.165 "data_size": 63488 00:12:11.165 }, 00:12:11.165 { 00:12:11.165 "name": "BaseBdev2", 00:12:11.165 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:11.165 "is_configured": true, 00:12:11.165 "data_offset": 2048, 00:12:11.165 "data_size": 63488 00:12:11.165 }, 00:12:11.165 { 00:12:11.165 "name": "BaseBdev3", 00:12:11.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.165 "is_configured": false, 00:12:11.165 "data_offset": 0, 00:12:11.165 "data_size": 0 00:12:11.165 } 00:12:11.165 ] 00:12:11.165 }' 00:12:11.165 11:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.165 11:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.732 11:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.990 [2024-07-25 11:23:27.812134] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.990 [2024-07-25 11:23:27.812722] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.990 [2024-07-25 11:23:27.812866] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:11.990 [2024-07-25 11:23:27.813258] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:11.990 BaseBdev3 00:12:11.990 [2024-07-25 
11:23:27.813582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.990 [2024-07-25 11:23:27.813612] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:11.990 [2024-07-25 11:23:27.813827] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:11.990 11:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.248 11:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:12.507 [ 00:12:12.507 { 00:12:12.507 "name": "BaseBdev3", 00:12:12.507 "aliases": [ 00:12:12.507 "1697ad9e-f8f3-4c5e-a408-5afee05b7e26" 00:12:12.507 ], 00:12:12.507 "product_name": "Malloc disk", 00:12:12.507 "block_size": 512, 00:12:12.507 "num_blocks": 65536, 00:12:12.507 "uuid": "1697ad9e-f8f3-4c5e-a408-5afee05b7e26", 00:12:12.507 "assigned_rate_limits": { 00:12:12.507 "rw_ios_per_sec": 0, 00:12:12.507 "rw_mbytes_per_sec": 0, 00:12:12.507 "r_mbytes_per_sec": 0, 00:12:12.507 "w_mbytes_per_sec": 0 00:12:12.507 }, 00:12:12.507 "claimed": true, 00:12:12.507 "claim_type": "exclusive_write", 00:12:12.507 "zoned": false, 00:12:12.507 "supported_io_types": { 00:12:12.507 "read": true, 00:12:12.507 "write": true, 00:12:12.507 "unmap": true, 00:12:12.507 "flush": true, 00:12:12.507 "reset": true, 00:12:12.507 "nvme_admin": false, 00:12:12.507 "nvme_io": false, 00:12:12.507 "nvme_io_md": false, 00:12:12.507 "write_zeroes": true, 00:12:12.507 "zcopy": true, 00:12:12.507 "get_zone_info": false, 00:12:12.507 "zone_management": false, 00:12:12.507 "zone_append": false, 00:12:12.507 "compare": false, 00:12:12.507 "compare_and_write": false, 00:12:12.507 "abort": true, 00:12:12.507 "seek_hole": false, 00:12:12.507 "seek_data": false, 00:12:12.507 "copy": true, 00:12:12.507 "nvme_iov_md": false 00:12:12.507 }, 00:12:12.507 "memory_domains": [ 00:12:12.507 { 00:12:12.507 "dma_device_id": "system", 00:12:12.507 "dma_device_type": 1 00:12:12.507 }, 00:12:12.507 { 00:12:12.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.507 "dma_device_type": 2 00:12:12.507 } 00:12:12.507 ], 00:12:12.507 "driver_specific": {} 00:12:12.507 } 00:12:12.507 ] 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state 
Existed_Raid online concat 64 3 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.507 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.765 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.765 "name": "Existed_Raid", 00:12:12.765 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:12.765 "strip_size_kb": 64, 00:12:12.765 "state": "online", 00:12:12.765 "raid_level": "concat", 00:12:12.765 "superblock": true, 00:12:12.765 "num_base_bdevs": 3, 00:12:12.765 "num_base_bdevs_discovered": 3, 00:12:12.765 "num_base_bdevs_operational": 3, 00:12:12.765 "base_bdevs_list": [ 00:12:12.765 { 00:12:12.765 "name": "BaseBdev1", 00:12:12.765 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:12.765 "is_configured": true, 00:12:12.765 "data_offset": 2048, 00:12:12.765 "data_size": 63488 00:12:12.765 }, 00:12:12.765 { 00:12:12.765 "name": "BaseBdev2", 00:12:12.765 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:12.765 "is_configured": true, 00:12:12.765 "data_offset": 2048, 00:12:12.765 "data_size": 63488 00:12:12.765 }, 00:12:12.765 { 00:12:12.765 "name": "BaseBdev3", 00:12:12.765 "uuid": "1697ad9e-f8f3-4c5e-a408-5afee05b7e26", 00:12:12.765 "is_configured": true, 00:12:12.765 "data_offset": 2048, 00:12:12.765 "data_size": 63488 00:12:12.765 } 00:12:12.765 ] 00:12:12.765 }' 00:12:12.765 11:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.765 11:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:13.699 11:23:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:13.699 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:13.699 [2024-07-25 11:23:29.573088] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.957 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:13.957 "name": "Existed_Raid", 00:12:13.957 "aliases": [ 00:12:13.957 "51754c3c-407f-4188-9fc1-220c9c53a698" 00:12:13.957 ], 00:12:13.957 "product_name": "Raid Volume", 00:12:13.957 "block_size": 512, 00:12:13.957 "num_blocks": 190464, 00:12:13.957 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:13.957 "assigned_rate_limits": { 00:12:13.957 "rw_ios_per_sec": 0, 00:12:13.957 "rw_mbytes_per_sec": 0, 00:12:13.957 "r_mbytes_per_sec": 0, 00:12:13.957 "w_mbytes_per_sec": 0 00:12:13.957 }, 00:12:13.957 "claimed": false, 00:12:13.957 "zoned": false, 00:12:13.957 "supported_io_types": { 00:12:13.957 "read": true, 00:12:13.957 "write": true, 00:12:13.957 "unmap": true, 00:12:13.957 "flush": true, 00:12:13.957 "reset": true, 00:12:13.957 "nvme_admin": false, 00:12:13.957 "nvme_io": false, 00:12:13.957 "nvme_io_md": false, 00:12:13.957 "write_zeroes": true, 00:12:13.957 "zcopy": false, 00:12:13.957 "get_zone_info": false, 00:12:13.957 "zone_management": false, 00:12:13.957 "zone_append": false, 00:12:13.957 "compare": false, 00:12:13.957 "compare_and_write": false, 00:12:13.957 "abort": false, 00:12:13.957 "seek_hole": false, 00:12:13.957 "seek_data": false, 00:12:13.957 "copy": false, 00:12:13.957 "nvme_iov_md": false 00:12:13.957 }, 00:12:13.957 "memory_domains": [ 00:12:13.957 { 00:12:13.957 "dma_device_id": "system", 00:12:13.957 "dma_device_type": 1 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.957 "dma_device_type": 2 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "dma_device_id": "system", 00:12:13.957 "dma_device_type": 1 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.957 "dma_device_type": 2 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "dma_device_id": "system", 00:12:13.957 "dma_device_type": 1 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.957 "dma_device_type": 2 00:12:13.957 } 00:12:13.957 ], 00:12:13.957 "driver_specific": { 00:12:13.957 "raid": { 00:12:13.957 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:13.957 "strip_size_kb": 64, 00:12:13.957 "state": "online", 00:12:13.957 "raid_level": "concat", 00:12:13.957 "superblock": true, 00:12:13.957 "num_base_bdevs": 3, 00:12:13.957 "num_base_bdevs_discovered": 3, 00:12:13.957 "num_base_bdevs_operational": 3, 00:12:13.957 "base_bdevs_list": [ 00:12:13.957 { 00:12:13.957 "name": "BaseBdev1", 00:12:13.957 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:13.957 "is_configured": true, 00:12:13.957 "data_offset": 2048, 00:12:13.957 "data_size": 63488 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "name": "BaseBdev2", 00:12:13.957 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:13.957 "is_configured": true, 00:12:13.957 "data_offset": 2048, 00:12:13.957 "data_size": 63488 00:12:13.957 }, 00:12:13.957 { 00:12:13.957 "name": "BaseBdev3", 00:12:13.957 "uuid": "1697ad9e-f8f3-4c5e-a408-5afee05b7e26", 00:12:13.957 "is_configured": true, 00:12:13.957 "data_offset": 2048, 00:12:13.957 "data_size": 
63488 00:12:13.957 } 00:12:13.958 ] 00:12:13.958 } 00:12:13.958 } 00:12:13.958 }' 00:12:13.958 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.958 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:13.958 BaseBdev2 00:12:13.958 BaseBdev3' 00:12:13.958 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:13.958 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:13.958 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:14.216 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.216 "name": "BaseBdev1", 00:12:14.216 "aliases": [ 00:12:14.216 "65b14227-d436-4042-b1f3-adc0fe9f0fe3" 00:12:14.216 ], 00:12:14.216 "product_name": "Malloc disk", 00:12:14.216 "block_size": 512, 00:12:14.216 "num_blocks": 65536, 00:12:14.216 "uuid": "65b14227-d436-4042-b1f3-adc0fe9f0fe3", 00:12:14.216 "assigned_rate_limits": { 00:12:14.216 "rw_ios_per_sec": 0, 00:12:14.216 "rw_mbytes_per_sec": 0, 00:12:14.216 "r_mbytes_per_sec": 0, 00:12:14.216 "w_mbytes_per_sec": 0 00:12:14.216 }, 00:12:14.216 "claimed": true, 00:12:14.216 "claim_type": "exclusive_write", 00:12:14.216 "zoned": false, 00:12:14.216 "supported_io_types": { 00:12:14.216 "read": true, 00:12:14.216 "write": true, 00:12:14.216 "unmap": true, 00:12:14.217 "flush": true, 00:12:14.217 "reset": true, 00:12:14.217 "nvme_admin": false, 00:12:14.217 "nvme_io": false, 00:12:14.217 "nvme_io_md": false, 00:12:14.217 "write_zeroes": true, 00:12:14.217 "zcopy": true, 00:12:14.217 "get_zone_info": false, 00:12:14.217 "zone_management": false, 00:12:14.217 "zone_append": false, 00:12:14.217 "compare": false, 00:12:14.217 "compare_and_write": false, 00:12:14.217 "abort": true, 00:12:14.217 "seek_hole": false, 00:12:14.217 "seek_data": false, 00:12:14.217 "copy": true, 00:12:14.217 "nvme_iov_md": false 00:12:14.217 }, 00:12:14.217 "memory_domains": [ 00:12:14.217 { 00:12:14.217 "dma_device_id": "system", 00:12:14.217 "dma_device_type": 1 00:12:14.217 }, 00:12:14.217 { 00:12:14.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.217 "dma_device_type": 2 00:12:14.217 } 00:12:14.217 ], 00:12:14.217 "driver_specific": {} 00:12:14.217 }' 00:12:14.217 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.217 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.217 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.217 11:23:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.217 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.217 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.217 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.476 11:23:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:14.476 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:14.734 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:14.734 "name": "BaseBdev2", 00:12:14.734 "aliases": [ 00:12:14.734 "8e48f802-d674-4c24-83d4-4358b68fc273" 00:12:14.734 ], 00:12:14.734 "product_name": "Malloc disk", 00:12:14.734 "block_size": 512, 00:12:14.734 "num_blocks": 65536, 00:12:14.734 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:14.734 "assigned_rate_limits": { 00:12:14.734 "rw_ios_per_sec": 0, 00:12:14.734 "rw_mbytes_per_sec": 0, 00:12:14.734 "r_mbytes_per_sec": 0, 00:12:14.734 "w_mbytes_per_sec": 0 00:12:14.734 }, 00:12:14.734 "claimed": true, 00:12:14.734 "claim_type": "exclusive_write", 00:12:14.734 "zoned": false, 00:12:14.734 "supported_io_types": { 00:12:14.734 "read": true, 00:12:14.734 "write": true, 00:12:14.734 "unmap": true, 00:12:14.734 "flush": true, 00:12:14.734 "reset": true, 00:12:14.734 "nvme_admin": false, 00:12:14.734 "nvme_io": false, 00:12:14.734 "nvme_io_md": false, 00:12:14.734 "write_zeroes": true, 00:12:14.734 "zcopy": true, 00:12:14.734 "get_zone_info": false, 00:12:14.734 "zone_management": false, 00:12:14.734 "zone_append": false, 00:12:14.734 "compare": false, 00:12:14.734 "compare_and_write": false, 00:12:14.734 "abort": true, 00:12:14.734 "seek_hole": false, 00:12:14.734 "seek_data": false, 00:12:14.734 "copy": true, 00:12:14.734 "nvme_iov_md": false 00:12:14.734 }, 00:12:14.734 "memory_domains": [ 00:12:14.734 { 00:12:14.734 "dma_device_id": "system", 00:12:14.734 "dma_device_type": 1 00:12:14.734 }, 00:12:14.734 { 00:12:14.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.734 "dma_device_type": 2 00:12:14.734 } 00:12:14.734 ], 00:12:14.734 "driver_specific": {} 00:12:14.734 }' 00:12:14.734 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.734 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:14.993 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.251 11:23:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.251 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.251 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.251 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:15.251 11:23:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.509 "name": "BaseBdev3", 00:12:15.509 "aliases": [ 00:12:15.509 "1697ad9e-f8f3-4c5e-a408-5afee05b7e26" 00:12:15.509 ], 00:12:15.509 "product_name": "Malloc disk", 00:12:15.509 "block_size": 512, 00:12:15.509 "num_blocks": 65536, 00:12:15.509 "uuid": "1697ad9e-f8f3-4c5e-a408-5afee05b7e26", 00:12:15.509 "assigned_rate_limits": { 00:12:15.509 "rw_ios_per_sec": 0, 00:12:15.509 "rw_mbytes_per_sec": 0, 00:12:15.509 "r_mbytes_per_sec": 0, 00:12:15.509 "w_mbytes_per_sec": 0 00:12:15.509 }, 00:12:15.509 "claimed": true, 00:12:15.509 "claim_type": "exclusive_write", 00:12:15.509 "zoned": false, 00:12:15.509 "supported_io_types": { 00:12:15.509 "read": true, 00:12:15.509 "write": true, 00:12:15.509 "unmap": true, 00:12:15.509 "flush": true, 00:12:15.509 "reset": true, 00:12:15.509 "nvme_admin": false, 00:12:15.509 "nvme_io": false, 00:12:15.509 "nvme_io_md": false, 00:12:15.509 "write_zeroes": true, 00:12:15.509 "zcopy": true, 00:12:15.509 "get_zone_info": false, 00:12:15.509 "zone_management": false, 00:12:15.509 "zone_append": false, 00:12:15.509 "compare": false, 00:12:15.509 "compare_and_write": false, 00:12:15.509 "abort": true, 00:12:15.509 "seek_hole": false, 00:12:15.509 "seek_data": false, 00:12:15.509 "copy": true, 00:12:15.509 "nvme_iov_md": false 00:12:15.509 }, 00:12:15.509 "memory_domains": [ 00:12:15.509 { 00:12:15.509 "dma_device_id": "system", 00:12:15.509 "dma_device_type": 1 00:12:15.509 }, 00:12:15.509 { 00:12:15.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.509 "dma_device_type": 2 00:12:15.509 } 00:12:15.509 ], 00:12:15.509 "driver_specific": {} 00:12:15.509 }' 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.509 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
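The block_size/md_size/md_interleave/dif_type checks above are plain jq lookups against a single bdev_get_bdevs entry; a minimal sketch for one base bdev, with the expected values (512 and null) taken from the output of this run:

  # Fetch BaseBdev2's descriptor and spot-check the fields verified by bdev_raid.sh@205-208.
  base_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b BaseBdev2 | jq '.[]')
  [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
  [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
  [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]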
00:12:15.767 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:16.026 [2024-07-25 11:23:31.800851] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.026 [2024-07-25 11:23:31.800899] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.026 [2024-07-25 11:23:31.800970] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.284 11:23:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.284 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.284 "name": "Existed_Raid", 00:12:16.284 "uuid": "51754c3c-407f-4188-9fc1-220c9c53a698", 00:12:16.284 "strip_size_kb": 64, 00:12:16.284 "state": "offline", 00:12:16.284 "raid_level": "concat", 00:12:16.284 "superblock": true, 00:12:16.284 "num_base_bdevs": 3, 00:12:16.284 "num_base_bdevs_discovered": 2, 00:12:16.284 "num_base_bdevs_operational": 2, 00:12:16.284 "base_bdevs_list": [ 00:12:16.284 { 00:12:16.284 "name": null, 00:12:16.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.284 "is_configured": false, 00:12:16.284 "data_offset": 2048, 00:12:16.284 "data_size": 63488 00:12:16.284 }, 00:12:16.284 { 00:12:16.284 "name": "BaseBdev2", 00:12:16.284 "uuid": "8e48f802-d674-4c24-83d4-4358b68fc273", 00:12:16.284 "is_configured": true, 00:12:16.284 "data_offset": 2048, 00:12:16.284 "data_size": 63488 00:12:16.284 }, 00:12:16.284 { 
00:12:16.284 "name": "BaseBdev3", 00:12:16.284 "uuid": "1697ad9e-f8f3-4c5e-a408-5afee05b7e26", 00:12:16.284 "is_configured": true, 00:12:16.284 "data_offset": 2048, 00:12:16.284 "data_size": 63488 00:12:16.284 } 00:12:16.284 ] 00:12:16.284 }' 00:12:16.284 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.284 11:23:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.218 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:17.218 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:17.218 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.218 11:23:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:17.218 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:17.218 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.218 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:17.476 [2024-07-25 11:23:33.355813] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.734 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:17.734 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:17.734 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.734 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:17.992 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:17.992 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.992 11:23:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:18.250 [2024-07-25 11:23:34.014104] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.250 [2024-07-25 11:23:34.014186] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.250 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.250 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.250 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.250 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.817 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:18.817 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:18.817 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:18.817 
11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:18.817 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:18.817 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:19.075 BaseBdev2 00:12:19.075 11:23:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.076 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.334 11:23:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:19.593 [ 00:12:19.593 { 00:12:19.593 "name": "BaseBdev2", 00:12:19.593 "aliases": [ 00:12:19.593 "bb982e38-3bcc-4ecf-8557-374cc0471deb" 00:12:19.593 ], 00:12:19.593 "product_name": "Malloc disk", 00:12:19.593 "block_size": 512, 00:12:19.593 "num_blocks": 65536, 00:12:19.593 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:19.593 "assigned_rate_limits": { 00:12:19.593 "rw_ios_per_sec": 0, 00:12:19.593 "rw_mbytes_per_sec": 0, 00:12:19.593 "r_mbytes_per_sec": 0, 00:12:19.593 "w_mbytes_per_sec": 0 00:12:19.593 }, 00:12:19.593 "claimed": false, 00:12:19.593 "zoned": false, 00:12:19.593 "supported_io_types": { 00:12:19.593 "read": true, 00:12:19.593 "write": true, 00:12:19.593 "unmap": true, 00:12:19.593 "flush": true, 00:12:19.593 "reset": true, 00:12:19.593 "nvme_admin": false, 00:12:19.593 "nvme_io": false, 00:12:19.593 "nvme_io_md": false, 00:12:19.593 "write_zeroes": true, 00:12:19.593 "zcopy": true, 00:12:19.593 "get_zone_info": false, 00:12:19.593 "zone_management": false, 00:12:19.593 "zone_append": false, 00:12:19.593 "compare": false, 00:12:19.593 "compare_and_write": false, 00:12:19.593 "abort": true, 00:12:19.593 "seek_hole": false, 00:12:19.593 "seek_data": false, 00:12:19.593 "copy": true, 00:12:19.593 "nvme_iov_md": false 00:12:19.593 }, 00:12:19.593 "memory_domains": [ 00:12:19.593 { 00:12:19.593 "dma_device_id": "system", 00:12:19.593 "dma_device_type": 1 00:12:19.593 }, 00:12:19.593 { 00:12:19.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.593 "dma_device_type": 2 00:12:19.593 } 00:12:19.593 ], 00:12:19.593 "driver_specific": {} 00:12:19.593 } 00:12:19.593 ] 00:12:19.593 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:19.593 11:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:19.593 11:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:19.593 11:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:19.851 BaseBdev3 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:19.851 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:20.109 11:23:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:20.367 [ 00:12:20.367 { 00:12:20.367 "name": "BaseBdev3", 00:12:20.367 "aliases": [ 00:12:20.367 "65826adb-627c-4858-80f6-6a3f1ab7e87d" 00:12:20.367 ], 00:12:20.367 "product_name": "Malloc disk", 00:12:20.367 "block_size": 512, 00:12:20.367 "num_blocks": 65536, 00:12:20.367 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:20.367 "assigned_rate_limits": { 00:12:20.367 "rw_ios_per_sec": 0, 00:12:20.367 "rw_mbytes_per_sec": 0, 00:12:20.367 "r_mbytes_per_sec": 0, 00:12:20.367 "w_mbytes_per_sec": 0 00:12:20.367 }, 00:12:20.367 "claimed": false, 00:12:20.367 "zoned": false, 00:12:20.367 "supported_io_types": { 00:12:20.367 "read": true, 00:12:20.367 "write": true, 00:12:20.367 "unmap": true, 00:12:20.367 "flush": true, 00:12:20.367 "reset": true, 00:12:20.367 "nvme_admin": false, 00:12:20.367 "nvme_io": false, 00:12:20.367 "nvme_io_md": false, 00:12:20.367 "write_zeroes": true, 00:12:20.367 "zcopy": true, 00:12:20.367 "get_zone_info": false, 00:12:20.367 "zone_management": false, 00:12:20.367 "zone_append": false, 00:12:20.367 "compare": false, 00:12:20.367 "compare_and_write": false, 00:12:20.367 "abort": true, 00:12:20.367 "seek_hole": false, 00:12:20.367 "seek_data": false, 00:12:20.367 "copy": true, 00:12:20.367 "nvme_iov_md": false 00:12:20.367 }, 00:12:20.367 "memory_domains": [ 00:12:20.367 { 00:12:20.367 "dma_device_id": "system", 00:12:20.367 "dma_device_type": 1 00:12:20.367 }, 00:12:20.367 { 00:12:20.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.367 "dma_device_type": 2 00:12:20.367 } 00:12:20.367 ], 00:12:20.367 "driver_specific": {} 00:12:20.367 } 00:12:20.367 ] 00:12:20.367 11:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:20.367 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:20.367 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:20.368 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:20.625 [2024-07-25 11:23:36.338504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.625 
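The assembly attempt above is the plain bdev_raid_create RPC; as a standalone sketch with the same flags this test passes (-z strip size in KB, -s to store a superblock, -r raid level, -b space-separated base bdev names, -n raid bdev name):

  # Begin configuring a 3-disk concat raid with an on-disk superblock; base bdevs
  # that do not exist yet simply leave the raid in the "configuring" state.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid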
[2024-07-25 11:23:36.338608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.625 [2024-07-25 11:23:36.338695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.625 [2024-07-25 11:23:36.341240] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.625 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.626 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.626 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.883 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.883 "name": "Existed_Raid", 00:12:20.883 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:20.883 "strip_size_kb": 64, 00:12:20.883 "state": "configuring", 00:12:20.883 "raid_level": "concat", 00:12:20.883 "superblock": true, 00:12:20.883 "num_base_bdevs": 3, 00:12:20.883 "num_base_bdevs_discovered": 2, 00:12:20.883 "num_base_bdevs_operational": 3, 00:12:20.883 "base_bdevs_list": [ 00:12:20.883 { 00:12:20.883 "name": "BaseBdev1", 00:12:20.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.884 "is_configured": false, 00:12:20.884 "data_offset": 0, 00:12:20.884 "data_size": 0 00:12:20.884 }, 00:12:20.884 { 00:12:20.884 "name": "BaseBdev2", 00:12:20.884 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:20.884 "is_configured": true, 00:12:20.884 "data_offset": 2048, 00:12:20.884 "data_size": 63488 00:12:20.884 }, 00:12:20.884 { 00:12:20.884 "name": "BaseBdev3", 00:12:20.884 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:20.884 "is_configured": true, 00:12:20.884 "data_offset": 2048, 00:12:20.884 "data_size": 63488 00:12:20.884 } 00:12:20.884 ] 00:12:20.884 }' 00:12:20.884 11:23:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.884 11:23:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:21.817 [2024-07-25 11:23:37.578868] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.817 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.818 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.818 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.076 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.076 "name": "Existed_Raid", 00:12:22.076 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:22.076 "strip_size_kb": 64, 00:12:22.076 "state": "configuring", 00:12:22.076 "raid_level": "concat", 00:12:22.076 "superblock": true, 00:12:22.076 "num_base_bdevs": 3, 00:12:22.076 "num_base_bdevs_discovered": 1, 00:12:22.076 "num_base_bdevs_operational": 3, 00:12:22.076 "base_bdevs_list": [ 00:12:22.076 { 00:12:22.076 "name": "BaseBdev1", 00:12:22.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.076 "is_configured": false, 00:12:22.076 "data_offset": 0, 00:12:22.076 "data_size": 0 00:12:22.076 }, 00:12:22.076 { 00:12:22.076 "name": null, 00:12:22.076 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:22.076 "is_configured": false, 00:12:22.076 "data_offset": 2048, 00:12:22.076 "data_size": 63488 00:12:22.076 }, 00:12:22.076 { 00:12:22.076 "name": "BaseBdev3", 00:12:22.076 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:22.076 "is_configured": true, 00:12:22.076 "data_offset": 2048, 00:12:22.076 "data_size": 63488 00:12:22.076 } 00:12:22.076 ] 00:12:22.076 }' 00:12:22.076 11:23:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.076 11:23:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.011 11:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.012 11:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:23.012 11:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:23.012 11:23:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.578 [2024-07-25 11:23:39.186343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.578 BaseBdev1 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:23.579 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:23.836 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.836 [ 00:12:23.836 { 00:12:23.836 "name": "BaseBdev1", 00:12:23.836 "aliases": [ 00:12:23.836 "a8663831-7e61-4fba-8cbc-82d3a0a2287e" 00:12:23.836 ], 00:12:23.836 "product_name": "Malloc disk", 00:12:23.836 "block_size": 512, 00:12:23.836 "num_blocks": 65536, 00:12:23.836 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:23.836 "assigned_rate_limits": { 00:12:23.836 "rw_ios_per_sec": 0, 00:12:23.836 "rw_mbytes_per_sec": 0, 00:12:23.836 "r_mbytes_per_sec": 0, 00:12:23.836 "w_mbytes_per_sec": 0 00:12:23.836 }, 00:12:23.836 "claimed": true, 00:12:23.836 "claim_type": "exclusive_write", 00:12:23.836 "zoned": false, 00:12:23.836 "supported_io_types": { 00:12:23.836 "read": true, 00:12:23.836 "write": true, 00:12:23.836 "unmap": true, 00:12:23.836 "flush": true, 00:12:23.836 "reset": true, 00:12:23.836 "nvme_admin": false, 00:12:23.836 "nvme_io": false, 00:12:23.837 "nvme_io_md": false, 00:12:23.837 "write_zeroes": true, 00:12:23.837 "zcopy": true, 00:12:23.837 "get_zone_info": false, 00:12:23.837 "zone_management": false, 00:12:23.837 "zone_append": false, 00:12:23.837 "compare": false, 00:12:23.837 "compare_and_write": false, 00:12:23.837 "abort": true, 00:12:23.837 "seek_hole": false, 00:12:23.837 "seek_data": false, 00:12:23.837 "copy": true, 00:12:23.837 "nvme_iov_md": false 00:12:23.837 }, 00:12:23.837 "memory_domains": [ 00:12:23.837 { 00:12:23.837 "dma_device_id": "system", 00:12:23.837 "dma_device_type": 1 00:12:23.837 }, 00:12:23.837 { 00:12:23.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.837 "dma_device_type": 2 00:12:23.837 } 00:12:23.837 ], 00:12:23.837 "driver_specific": {} 00:12:23.837 } 00:12:23.837 ] 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:24.095 11:23:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.095 11:23:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.353 11:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.353 "name": "Existed_Raid", 00:12:24.353 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:24.353 "strip_size_kb": 64, 00:12:24.353 "state": "configuring", 00:12:24.353 "raid_level": "concat", 00:12:24.353 "superblock": true, 00:12:24.353 "num_base_bdevs": 3, 00:12:24.353 "num_base_bdevs_discovered": 2, 00:12:24.353 "num_base_bdevs_operational": 3, 00:12:24.353 "base_bdevs_list": [ 00:12:24.353 { 00:12:24.353 "name": "BaseBdev1", 00:12:24.353 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:24.353 "is_configured": true, 00:12:24.353 "data_offset": 2048, 00:12:24.353 "data_size": 63488 00:12:24.353 }, 00:12:24.353 { 00:12:24.353 "name": null, 00:12:24.353 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:24.353 "is_configured": false, 00:12:24.353 "data_offset": 2048, 00:12:24.354 "data_size": 63488 00:12:24.354 }, 00:12:24.354 { 00:12:24.354 "name": "BaseBdev3", 00:12:24.354 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:24.354 "is_configured": true, 00:12:24.354 "data_offset": 2048, 00:12:24.354 "data_size": 63488 00:12:24.354 } 00:12:24.354 ] 00:12:24.354 }' 00:12:24.354 11:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.354 11:23:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.921 11:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.921 11:23:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:25.488 [2024-07-25 11:23:41.335214] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.488 11:23:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.488 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.056 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.056 "name": "Existed_Raid", 00:12:26.056 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:26.056 "strip_size_kb": 64, 00:12:26.056 "state": "configuring", 00:12:26.056 "raid_level": "concat", 00:12:26.056 "superblock": true, 00:12:26.056 "num_base_bdevs": 3, 00:12:26.056 "num_base_bdevs_discovered": 1, 00:12:26.056 "num_base_bdevs_operational": 3, 00:12:26.056 "base_bdevs_list": [ 00:12:26.056 { 00:12:26.056 "name": "BaseBdev1", 00:12:26.056 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:26.056 "is_configured": true, 00:12:26.056 "data_offset": 2048, 00:12:26.056 "data_size": 63488 00:12:26.056 }, 00:12:26.056 { 00:12:26.056 "name": null, 00:12:26.056 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:26.056 "is_configured": false, 00:12:26.056 "data_offset": 2048, 00:12:26.056 "data_size": 63488 00:12:26.056 }, 00:12:26.056 { 00:12:26.056 "name": null, 00:12:26.056 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:26.056 "is_configured": false, 00:12:26.056 "data_offset": 2048, 00:12:26.056 "data_size": 63488 00:12:26.056 } 00:12:26.056 ] 00:12:26.056 }' 00:12:26.056 11:23:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.056 11:23:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.623 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.623 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:26.882 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:26.882 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:27.141 [2024-07-25 11:23:42.919728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
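Dropping a base bdev and wiring it back in, as exercised just above, is a symmetric pair of RPCs; a minimal sketch using the names from this run:

  # Detach BaseBdev3 from the raid, then re-attach it; in this run the raid stays
  # "configuring" because BaseBdev1 is still missing from the three-bdev set.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3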
00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.141 11:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.400 11:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.400 "name": "Existed_Raid", 00:12:27.400 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:27.400 "strip_size_kb": 64, 00:12:27.400 "state": "configuring", 00:12:27.400 "raid_level": "concat", 00:12:27.400 "superblock": true, 00:12:27.400 "num_base_bdevs": 3, 00:12:27.400 "num_base_bdevs_discovered": 2, 00:12:27.400 "num_base_bdevs_operational": 3, 00:12:27.400 "base_bdevs_list": [ 00:12:27.400 { 00:12:27.400 "name": "BaseBdev1", 00:12:27.400 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:27.400 "is_configured": true, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 }, 00:12:27.400 { 00:12:27.400 "name": null, 00:12:27.400 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:27.400 "is_configured": false, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 }, 00:12:27.400 { 00:12:27.400 "name": "BaseBdev3", 00:12:27.400 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:27.400 "is_configured": true, 00:12:27.400 "data_offset": 2048, 00:12:27.400 "data_size": 63488 00:12:27.400 } 00:12:27.400 ] 00:12:27.400 }' 00:12:27.400 11:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.400 11:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.333 11:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.334 11:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:28.592 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:28.592 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:28.850 [2024-07-25 11:23:44.540274] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:28.850 11:23:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.850 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.109 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.109 "name": "Existed_Raid", 00:12:29.109 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:29.109 "strip_size_kb": 64, 00:12:29.109 "state": "configuring", 00:12:29.109 "raid_level": "concat", 00:12:29.109 "superblock": true, 00:12:29.109 "num_base_bdevs": 3, 00:12:29.109 "num_base_bdevs_discovered": 1, 00:12:29.109 "num_base_bdevs_operational": 3, 00:12:29.109 "base_bdevs_list": [ 00:12:29.109 { 00:12:29.109 "name": null, 00:12:29.109 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:29.109 "is_configured": false, 00:12:29.109 "data_offset": 2048, 00:12:29.109 "data_size": 63488 00:12:29.109 }, 00:12:29.109 { 00:12:29.109 "name": null, 00:12:29.109 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:29.109 "is_configured": false, 00:12:29.109 "data_offset": 2048, 00:12:29.109 "data_size": 63488 00:12:29.109 }, 00:12:29.109 { 00:12:29.109 "name": "BaseBdev3", 00:12:29.109 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:29.109 "is_configured": true, 00:12:29.109 "data_offset": 2048, 00:12:29.109 "data_size": 63488 00:12:29.109 } 00:12:29.109 ] 00:12:29.109 }' 00:12:29.109 11:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.109 11:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.065 11:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.065 11:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.065 11:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:30.065 11:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
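The step traced here removes the malloc bdev backing BaseBdev1 and attaches BaseBdev2 into the still-configuring array. A rough sketch of that flow using the same RPC calls shown in the trace; the jq index is illustrative:

rpc_py="scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Deleting the backing malloc bdev knocks BaseBdev1 out of the configuring array...
$rpc_py bdev_malloc_delete BaseBdev1
# ...and another device can then be attached into an open slot.
$rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev2

# The re-added slot should now report is_configured == true.
$rpc_py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'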
00:12:30.631 [2024-07-25 11:23:46.213066] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.631 "name": "Existed_Raid", 00:12:30.631 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:30.631 "strip_size_kb": 64, 00:12:30.631 "state": "configuring", 00:12:30.631 "raid_level": "concat", 00:12:30.631 "superblock": true, 00:12:30.631 "num_base_bdevs": 3, 00:12:30.631 "num_base_bdevs_discovered": 2, 00:12:30.631 "num_base_bdevs_operational": 3, 00:12:30.631 "base_bdevs_list": [ 00:12:30.631 { 00:12:30.631 "name": null, 00:12:30.631 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:30.631 "is_configured": false, 00:12:30.631 "data_offset": 2048, 00:12:30.631 "data_size": 63488 00:12:30.631 }, 00:12:30.631 { 00:12:30.631 "name": "BaseBdev2", 00:12:30.631 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:30.631 "is_configured": true, 00:12:30.631 "data_offset": 2048, 00:12:30.631 "data_size": 63488 00:12:30.631 }, 00:12:30.631 { 00:12:30.631 "name": "BaseBdev3", 00:12:30.631 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:30.631 "is_configured": true, 00:12:30.631 "data_offset": 2048, 00:12:30.631 "data_size": 63488 00:12:30.631 } 00:12:30.631 ] 00:12:30.631 }' 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.631 11:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.565 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.565 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:31.822 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:31.822 11:23:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.822 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:32.080 11:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a8663831-7e61-4fba-8cbc-82d3a0a2287e 00:12:32.338 [2024-07-25 11:23:48.089615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:32.338 [2024-07-25 11:23:48.089923] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:32.338 [2024-07-25 11:23:48.089941] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:32.338 [2024-07-25 11:23:48.090261] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:32.338 [2024-07-25 11:23:48.090438] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:32.338 [2024-07-25 11:23:48.090460] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:32.338 [2024-07-25 11:23:48.090649] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.338 NewBaseBdev 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:32.338 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.596 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:32.855 [ 00:12:32.855 { 00:12:32.855 "name": "NewBaseBdev", 00:12:32.855 "aliases": [ 00:12:32.855 "a8663831-7e61-4fba-8cbc-82d3a0a2287e" 00:12:32.855 ], 00:12:32.855 "product_name": "Malloc disk", 00:12:32.855 "block_size": 512, 00:12:32.855 "num_blocks": 65536, 00:12:32.855 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:32.855 "assigned_rate_limits": { 00:12:32.855 "rw_ios_per_sec": 0, 00:12:32.855 "rw_mbytes_per_sec": 0, 00:12:32.855 "r_mbytes_per_sec": 0, 00:12:32.855 "w_mbytes_per_sec": 0 00:12:32.855 }, 00:12:32.855 "claimed": true, 00:12:32.855 "claim_type": "exclusive_write", 00:12:32.855 "zoned": false, 00:12:32.855 "supported_io_types": { 00:12:32.855 "read": true, 00:12:32.855 "write": true, 00:12:32.855 "unmap": true, 00:12:32.855 "flush": true, 00:12:32.855 "reset": true, 00:12:32.855 "nvme_admin": false, 00:12:32.855 "nvme_io": false, 00:12:32.855 "nvme_io_md": false, 00:12:32.855 "write_zeroes": true, 00:12:32.855 "zcopy": true, 00:12:32.855 "get_zone_info": false, 00:12:32.855 "zone_management": 
false, 00:12:32.855 "zone_append": false, 00:12:32.855 "compare": false, 00:12:32.855 "compare_and_write": false, 00:12:32.855 "abort": true, 00:12:32.855 "seek_hole": false, 00:12:32.855 "seek_data": false, 00:12:32.855 "copy": true, 00:12:32.855 "nvme_iov_md": false 00:12:32.855 }, 00:12:32.855 "memory_domains": [ 00:12:32.855 { 00:12:32.855 "dma_device_id": "system", 00:12:32.855 "dma_device_type": 1 00:12:32.855 }, 00:12:32.855 { 00:12:32.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.855 "dma_device_type": 2 00:12:32.855 } 00:12:32.855 ], 00:12:32.855 "driver_specific": {} 00:12:32.855 } 00:12:32.855 ] 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.855 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.114 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.114 "name": "Existed_Raid", 00:12:33.114 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:33.114 "strip_size_kb": 64, 00:12:33.114 "state": "online", 00:12:33.114 "raid_level": "concat", 00:12:33.114 "superblock": true, 00:12:33.114 "num_base_bdevs": 3, 00:12:33.114 "num_base_bdevs_discovered": 3, 00:12:33.115 "num_base_bdevs_operational": 3, 00:12:33.115 "base_bdevs_list": [ 00:12:33.115 { 00:12:33.115 "name": "NewBaseBdev", 00:12:33.115 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:33.115 "is_configured": true, 00:12:33.115 "data_offset": 2048, 00:12:33.115 "data_size": 63488 00:12:33.115 }, 00:12:33.115 { 00:12:33.115 "name": "BaseBdev2", 00:12:33.115 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:33.115 "is_configured": true, 00:12:33.115 "data_offset": 2048, 00:12:33.115 "data_size": 63488 00:12:33.115 }, 00:12:33.115 { 00:12:33.115 "name": "BaseBdev3", 00:12:33.115 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:33.115 "is_configured": true, 00:12:33.115 "data_offset": 2048, 00:12:33.115 "data_size": 63488 00:12:33.115 } 00:12:33.115 ] 00:12:33.115 }' 00:12:33.115 11:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
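To bring the array online, the trace recreates the missing member under its original UUID and waits for examine to claim it. A sketch of that sequence under the same socket assumption; the UUID is copied from the trace and the readiness check stands in for the waitforbdev helper:

rpc_py="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
uuid=a8663831-7e61-4fba-8cbc-82d3a0a2287e   # UUID the superblock recorded for the old BaseBdev1

# Recreate the missing member under the expected UUID and let examine claim it.
$rpc_py bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
$rpc_py bdev_wait_for_examine
$rpc_py bdev_get_bdevs -b NewBaseBdev -t 2000 > /dev/null   # waitforbdev-style readiness check

# With all three slots configured, the array should report state "online".
$rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'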
00:12:33.115 11:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:34.055 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:34.055 [2024-07-25 11:23:49.918585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:34.313 "name": "Existed_Raid", 00:12:34.313 "aliases": [ 00:12:34.313 "44ab1d7c-305a-4d4a-bc53-6009077c3337" 00:12:34.313 ], 00:12:34.313 "product_name": "Raid Volume", 00:12:34.313 "block_size": 512, 00:12:34.313 "num_blocks": 190464, 00:12:34.313 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:34.313 "assigned_rate_limits": { 00:12:34.313 "rw_ios_per_sec": 0, 00:12:34.313 "rw_mbytes_per_sec": 0, 00:12:34.313 "r_mbytes_per_sec": 0, 00:12:34.313 "w_mbytes_per_sec": 0 00:12:34.313 }, 00:12:34.313 "claimed": false, 00:12:34.313 "zoned": false, 00:12:34.313 "supported_io_types": { 00:12:34.313 "read": true, 00:12:34.313 "write": true, 00:12:34.313 "unmap": true, 00:12:34.313 "flush": true, 00:12:34.313 "reset": true, 00:12:34.313 "nvme_admin": false, 00:12:34.313 "nvme_io": false, 00:12:34.313 "nvme_io_md": false, 00:12:34.313 "write_zeroes": true, 00:12:34.313 "zcopy": false, 00:12:34.313 "get_zone_info": false, 00:12:34.313 "zone_management": false, 00:12:34.313 "zone_append": false, 00:12:34.313 "compare": false, 00:12:34.313 "compare_and_write": false, 00:12:34.313 "abort": false, 00:12:34.313 "seek_hole": false, 00:12:34.313 "seek_data": false, 00:12:34.313 "copy": false, 00:12:34.313 "nvme_iov_md": false 00:12:34.313 }, 00:12:34.313 "memory_domains": [ 00:12:34.313 { 00:12:34.313 "dma_device_id": "system", 00:12:34.313 "dma_device_type": 1 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.313 "dma_device_type": 2 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "dma_device_id": "system", 00:12:34.313 "dma_device_type": 1 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.313 "dma_device_type": 2 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "dma_device_id": "system", 00:12:34.313 "dma_device_type": 1 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.313 "dma_device_type": 2 00:12:34.313 } 00:12:34.313 ], 00:12:34.313 "driver_specific": { 00:12:34.313 "raid": { 00:12:34.313 "uuid": "44ab1d7c-305a-4d4a-bc53-6009077c3337", 00:12:34.313 "strip_size_kb": 64, 00:12:34.313 "state": "online", 00:12:34.313 "raid_level": "concat", 00:12:34.313 "superblock": true, 
00:12:34.313 "num_base_bdevs": 3, 00:12:34.313 "num_base_bdevs_discovered": 3, 00:12:34.313 "num_base_bdevs_operational": 3, 00:12:34.313 "base_bdevs_list": [ 00:12:34.313 { 00:12:34.313 "name": "NewBaseBdev", 00:12:34.313 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:34.313 "is_configured": true, 00:12:34.313 "data_offset": 2048, 00:12:34.313 "data_size": 63488 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "name": "BaseBdev2", 00:12:34.313 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:34.313 "is_configured": true, 00:12:34.313 "data_offset": 2048, 00:12:34.313 "data_size": 63488 00:12:34.313 }, 00:12:34.313 { 00:12:34.313 "name": "BaseBdev3", 00:12:34.313 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:34.313 "is_configured": true, 00:12:34.313 "data_offset": 2048, 00:12:34.313 "data_size": 63488 00:12:34.313 } 00:12:34.313 ] 00:12:34.313 } 00:12:34.313 } 00:12:34.313 }' 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:34.313 BaseBdev2 00:12:34.313 BaseBdev3' 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:34.313 11:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:34.572 "name": "NewBaseBdev", 00:12:34.572 "aliases": [ 00:12:34.572 "a8663831-7e61-4fba-8cbc-82d3a0a2287e" 00:12:34.572 ], 00:12:34.572 "product_name": "Malloc disk", 00:12:34.572 "block_size": 512, 00:12:34.572 "num_blocks": 65536, 00:12:34.572 "uuid": "a8663831-7e61-4fba-8cbc-82d3a0a2287e", 00:12:34.572 "assigned_rate_limits": { 00:12:34.572 "rw_ios_per_sec": 0, 00:12:34.572 "rw_mbytes_per_sec": 0, 00:12:34.572 "r_mbytes_per_sec": 0, 00:12:34.572 "w_mbytes_per_sec": 0 00:12:34.572 }, 00:12:34.572 "claimed": true, 00:12:34.572 "claim_type": "exclusive_write", 00:12:34.572 "zoned": false, 00:12:34.572 "supported_io_types": { 00:12:34.572 "read": true, 00:12:34.572 "write": true, 00:12:34.572 "unmap": true, 00:12:34.572 "flush": true, 00:12:34.572 "reset": true, 00:12:34.572 "nvme_admin": false, 00:12:34.572 "nvme_io": false, 00:12:34.572 "nvme_io_md": false, 00:12:34.572 "write_zeroes": true, 00:12:34.572 "zcopy": true, 00:12:34.572 "get_zone_info": false, 00:12:34.572 "zone_management": false, 00:12:34.572 "zone_append": false, 00:12:34.572 "compare": false, 00:12:34.572 "compare_and_write": false, 00:12:34.572 "abort": true, 00:12:34.572 "seek_hole": false, 00:12:34.572 "seek_data": false, 00:12:34.572 "copy": true, 00:12:34.572 "nvme_iov_md": false 00:12:34.572 }, 00:12:34.572 "memory_domains": [ 00:12:34.572 { 00:12:34.572 "dma_device_id": "system", 00:12:34.572 "dma_device_type": 1 00:12:34.572 }, 00:12:34.572 { 00:12:34.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.572 "dma_device_type": 2 00:12:34.572 } 00:12:34.572 ], 00:12:34.572 "driver_specific": {} 00:12:34.572 }' 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:34.572 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:34.830 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:35.088 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:35.088 "name": "BaseBdev2", 00:12:35.088 "aliases": [ 00:12:35.088 "bb982e38-3bcc-4ecf-8557-374cc0471deb" 00:12:35.088 ], 00:12:35.088 "product_name": "Malloc disk", 00:12:35.088 "block_size": 512, 00:12:35.088 "num_blocks": 65536, 00:12:35.088 "uuid": "bb982e38-3bcc-4ecf-8557-374cc0471deb", 00:12:35.088 "assigned_rate_limits": { 00:12:35.088 "rw_ios_per_sec": 0, 00:12:35.088 "rw_mbytes_per_sec": 0, 00:12:35.088 "r_mbytes_per_sec": 0, 00:12:35.088 "w_mbytes_per_sec": 0 00:12:35.088 }, 00:12:35.088 "claimed": true, 00:12:35.088 "claim_type": "exclusive_write", 00:12:35.088 "zoned": false, 00:12:35.088 "supported_io_types": { 00:12:35.088 "read": true, 00:12:35.088 "write": true, 00:12:35.088 "unmap": true, 00:12:35.088 "flush": true, 00:12:35.088 "reset": true, 00:12:35.088 "nvme_admin": false, 00:12:35.088 "nvme_io": false, 00:12:35.088 "nvme_io_md": false, 00:12:35.088 "write_zeroes": true, 00:12:35.088 "zcopy": true, 00:12:35.088 "get_zone_info": false, 00:12:35.088 "zone_management": false, 00:12:35.088 "zone_append": false, 00:12:35.088 "compare": false, 00:12:35.088 "compare_and_write": false, 00:12:35.088 "abort": true, 00:12:35.088 "seek_hole": false, 00:12:35.088 "seek_data": false, 00:12:35.088 "copy": true, 00:12:35.088 "nvme_iov_md": false 00:12:35.088 }, 00:12:35.088 "memory_domains": [ 00:12:35.088 { 00:12:35.088 "dma_device_id": "system", 00:12:35.088 "dma_device_type": 1 00:12:35.088 }, 00:12:35.088 { 00:12:35.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.088 "dma_device_type": 2 00:12:35.088 } 00:12:35.088 ], 00:12:35.088 "driver_specific": {} 00:12:35.088 }' 00:12:35.088 11:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
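The block_size/md_size/md_interleave/dif_type checks above compare each configured member against the RAID volume itself. A sketch of that loop under the same assumptions; this is an illustrative reimplementation, not the verify_raid_bdev_properties helper:

rpc_py="scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_info=$($rpc_py bdev_get_bdevs -b Existed_Raid | jq '.[]')
names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
               | select(.is_configured == true).name' <<< "$raid_info")

for name in $names; do
    base_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
    # Each member must expose the same geometry and metadata layout as the volume.
    for field in .block_size .md_size .md_interleave .dif_type; do
        [[ "$(jq -r "$field" <<< "$raid_info")" == "$(jq -r "$field" <<< "$base_info")" ]]
    done
done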
00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:35.346 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:35.605 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:35.863 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:35.863 "name": "BaseBdev3", 00:12:35.863 "aliases": [ 00:12:35.863 "65826adb-627c-4858-80f6-6a3f1ab7e87d" 00:12:35.863 ], 00:12:35.863 "product_name": "Malloc disk", 00:12:35.863 "block_size": 512, 00:12:35.863 "num_blocks": 65536, 00:12:35.863 "uuid": "65826adb-627c-4858-80f6-6a3f1ab7e87d", 00:12:35.863 "assigned_rate_limits": { 00:12:35.863 "rw_ios_per_sec": 0, 00:12:35.863 "rw_mbytes_per_sec": 0, 00:12:35.863 "r_mbytes_per_sec": 0, 00:12:35.863 "w_mbytes_per_sec": 0 00:12:35.863 }, 00:12:35.863 "claimed": true, 00:12:35.863 "claim_type": "exclusive_write", 00:12:35.863 "zoned": false, 00:12:35.863 "supported_io_types": { 00:12:35.863 "read": true, 00:12:35.863 "write": true, 00:12:35.863 "unmap": true, 00:12:35.863 "flush": true, 00:12:35.863 "reset": true, 00:12:35.863 "nvme_admin": false, 00:12:35.863 "nvme_io": false, 00:12:35.863 "nvme_io_md": false, 00:12:35.863 "write_zeroes": true, 00:12:35.863 "zcopy": true, 00:12:35.863 "get_zone_info": false, 00:12:35.863 "zone_management": false, 00:12:35.863 "zone_append": false, 00:12:35.863 "compare": false, 00:12:35.863 "compare_and_write": false, 00:12:35.863 "abort": true, 00:12:35.863 "seek_hole": false, 00:12:35.863 "seek_data": false, 00:12:35.863 "copy": true, 00:12:35.863 "nvme_iov_md": false 00:12:35.863 }, 00:12:35.863 "memory_domains": [ 00:12:35.863 { 00:12:35.863 "dma_device_id": "system", 00:12:35.863 "dma_device_type": 1 00:12:35.863 }, 00:12:35.863 { 00:12:35.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.863 "dma_device_type": 2 00:12:35.863 } 00:12:35.863 ], 00:12:35.863 "driver_specific": {} 00:12:35.863 }' 00:12:35.863 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:35.863 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.121 11:23:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:36.121 11:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.380 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:36.380 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:36.380 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:36.638 [2024-07-25 11:23:52.350931] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.638 [2024-07-25 11:23:52.350989] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.638 [2024-07-25 11:23:52.351091] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.638 [2024-07-25 11:23:52.351174] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.638 [2024-07-25 11:23:52.351191] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 71049 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 71049 ']' 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 71049 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71049 00:12:36.638 killing process with pid 71049 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71049' 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 71049 00:12:36.638 [2024-07-25 11:23:52.399961] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.638 11:23:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 71049 00:12:36.896 [2024-07-25 11:23:52.678725] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.270 11:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:38.270 00:12:38.270 real 0m33.979s 00:12:38.270 user 1m2.125s 00:12:38.270 sys 0m4.534s 00:12:38.270 ************************************ 00:12:38.270 END TEST raid_state_function_test_sb 00:12:38.270 
************************************ 00:12:38.270 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.270 11:23:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.270 11:23:53 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:38.270 11:23:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:38.270 11:23:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.270 11:23:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.270 ************************************ 00:12:38.270 START TEST raid_superblock_test 00:12:38.270 ************************************ 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:12:38.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
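raid_superblock_test starts its own bdev_svc stub on a dedicated RPC socket before issuing any RPC calls. A sketch of that startup, with an inline polling loop standing in for the waitforlisten helper; paths are relative to the SPDK repository root:

sock=/var/tmp/spdk-raid.sock

# Start the stub app with raid debug logging on a dedicated RPC socket.
test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
raid_pid=$!

# Stand-in for: waitforlisten "$raid_pid" "$sock"
until scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done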
00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=72036 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 72036 /var/tmp/spdk-raid.sock 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72036 ']' 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.270 11:23:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.270 [2024-07-25 11:23:54.037201] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:12:38.270 [2024-07-25 11:23:54.037361] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72036 ] 00:12:38.529 [2024-07-25 11:23:54.201521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.787 [2024-07-25 11:23:54.478837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.045 [2024-07-25 11:23:54.709814] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.045 [2024-07-25 11:23:54.709915] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:39.302 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:39.560 malloc1 00:12:39.560 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:39.818 [2024-07-25 11:23:55.605600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:39.818 [2024-07-25 11:23:55.605732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.818 [2024-07-25 11:23:55.605772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:39.818 [2024-07-25 11:23:55.605794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.818 [2024-07-25 11:23:55.609076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.818 [2024-07-25 11:23:55.609130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:39.818 pt1 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:39.818 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:40.076 malloc2 00:12:40.076 11:23:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:40.335 [2024-07-25 11:23:56.130295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:40.335 [2024-07-25 11:23:56.130429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.335 [2024-07-25 11:23:56.130467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:40.335 [2024-07-25 11:23:56.130491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.335 [2024-07-25 11:23:56.133549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.335 [2024-07-25 11:23:56.133599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:40.335 pt2 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:40.335 11:23:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:40.335 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:40.593 malloc3 00:12:40.852 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:40.852 [2024-07-25 11:23:56.705923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:40.852 [2024-07-25 11:23:56.706023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.852 [2024-07-25 11:23:56.706062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:40.852 [2024-07-25 11:23:56.706083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.852 [2024-07-25 11:23:56.709159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.852 [2024-07-25 11:23:56.709207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:40.852 pt3 00:12:40.852 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:40.852 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:40.852 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:41.110 [2024-07-25 11:23:56.978197] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:41.111 [2024-07-25 11:23:56.981042] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.111 [2024-07-25 11:23:56.981144] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:41.111 [2024-07-25 11:23:56.981447] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:41.111 [2024-07-25 11:23:56.981468] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:41.111 [2024-07-25 11:23:56.981943] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:41.111 [2024-07-25 11:23:56.982213] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:41.111 [2024-07-25 11:23:56.982245] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:41.111 [2024-07-25 11:23:56.982538] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.440 11:23:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.440 11:23:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.716 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.716 "name": "raid_bdev1", 00:12:41.716 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:41.716 "strip_size_kb": 64, 00:12:41.716 "state": "online", 00:12:41.716 "raid_level": "concat", 00:12:41.716 "superblock": true, 00:12:41.716 "num_base_bdevs": 3, 00:12:41.716 "num_base_bdevs_discovered": 3, 00:12:41.716 "num_base_bdevs_operational": 3, 00:12:41.716 "base_bdevs_list": [ 00:12:41.716 { 00:12:41.716 "name": "pt1", 00:12:41.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:41.716 "is_configured": true, 00:12:41.716 "data_offset": 2048, 00:12:41.716 "data_size": 63488 00:12:41.716 }, 00:12:41.716 { 00:12:41.716 "name": "pt2", 00:12:41.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:41.716 "is_configured": true, 00:12:41.716 "data_offset": 2048, 00:12:41.716 "data_size": 63488 00:12:41.716 }, 00:12:41.716 { 00:12:41.716 "name": "pt3", 00:12:41.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:41.716 "is_configured": true, 00:12:41.716 "data_offset": 2048, 00:12:41.716 "data_size": 63488 00:12:41.716 } 00:12:41.716 ] 00:12:41.716 }' 00:12:41.716 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.716 11:23:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:42.304 11:23:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:42.563 [2024-07-25 11:23:58.247301] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.563 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:42.563 "name": "raid_bdev1", 00:12:42.563 "aliases": [ 00:12:42.563 "e93b880a-8ad5-4069-9532-ab29aa1fadff" 00:12:42.563 ], 00:12:42.563 "product_name": "Raid Volume", 00:12:42.563 "block_size": 512, 
00:12:42.563 "num_blocks": 190464, 00:12:42.563 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:42.563 "assigned_rate_limits": { 00:12:42.563 "rw_ios_per_sec": 0, 00:12:42.563 "rw_mbytes_per_sec": 0, 00:12:42.563 "r_mbytes_per_sec": 0, 00:12:42.563 "w_mbytes_per_sec": 0 00:12:42.563 }, 00:12:42.563 "claimed": false, 00:12:42.563 "zoned": false, 00:12:42.563 "supported_io_types": { 00:12:42.563 "read": true, 00:12:42.563 "write": true, 00:12:42.563 "unmap": true, 00:12:42.563 "flush": true, 00:12:42.563 "reset": true, 00:12:42.563 "nvme_admin": false, 00:12:42.563 "nvme_io": false, 00:12:42.563 "nvme_io_md": false, 00:12:42.563 "write_zeroes": true, 00:12:42.563 "zcopy": false, 00:12:42.563 "get_zone_info": false, 00:12:42.563 "zone_management": false, 00:12:42.563 "zone_append": false, 00:12:42.563 "compare": false, 00:12:42.563 "compare_and_write": false, 00:12:42.563 "abort": false, 00:12:42.563 "seek_hole": false, 00:12:42.563 "seek_data": false, 00:12:42.563 "copy": false, 00:12:42.563 "nvme_iov_md": false 00:12:42.563 }, 00:12:42.563 "memory_domains": [ 00:12:42.563 { 00:12:42.563 "dma_device_id": "system", 00:12:42.563 "dma_device_type": 1 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.563 "dma_device_type": 2 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "dma_device_id": "system", 00:12:42.563 "dma_device_type": 1 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.563 "dma_device_type": 2 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "dma_device_id": "system", 00:12:42.563 "dma_device_type": 1 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.563 "dma_device_type": 2 00:12:42.563 } 00:12:42.563 ], 00:12:42.563 "driver_specific": { 00:12:42.563 "raid": { 00:12:42.563 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:42.563 "strip_size_kb": 64, 00:12:42.563 "state": "online", 00:12:42.563 "raid_level": "concat", 00:12:42.563 "superblock": true, 00:12:42.563 "num_base_bdevs": 3, 00:12:42.563 "num_base_bdevs_discovered": 3, 00:12:42.563 "num_base_bdevs_operational": 3, 00:12:42.563 "base_bdevs_list": [ 00:12:42.563 { 00:12:42.563 "name": "pt1", 00:12:42.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.563 "is_configured": true, 00:12:42.563 "data_offset": 2048, 00:12:42.563 "data_size": 63488 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "name": "pt2", 00:12:42.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:42.563 "is_configured": true, 00:12:42.563 "data_offset": 2048, 00:12:42.563 "data_size": 63488 00:12:42.563 }, 00:12:42.563 { 00:12:42.563 "name": "pt3", 00:12:42.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:42.563 "is_configured": true, 00:12:42.563 "data_offset": 2048, 00:12:42.563 "data_size": 63488 00:12:42.563 } 00:12:42.563 ] 00:12:42.563 } 00:12:42.563 } 00:12:42.563 }' 00:12:42.563 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.563 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:42.563 pt2 00:12:42.563 pt3' 00:12:42.563 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:42.563 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:42.563 11:23:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:42.822 "name": "pt1", 00:12:42.822 "aliases": [ 00:12:42.822 "00000000-0000-0000-0000-000000000001" 00:12:42.822 ], 00:12:42.822 "product_name": "passthru", 00:12:42.822 "block_size": 512, 00:12:42.822 "num_blocks": 65536, 00:12:42.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:42.822 "assigned_rate_limits": { 00:12:42.822 "rw_ios_per_sec": 0, 00:12:42.822 "rw_mbytes_per_sec": 0, 00:12:42.822 "r_mbytes_per_sec": 0, 00:12:42.822 "w_mbytes_per_sec": 0 00:12:42.822 }, 00:12:42.822 "claimed": true, 00:12:42.822 "claim_type": "exclusive_write", 00:12:42.822 "zoned": false, 00:12:42.822 "supported_io_types": { 00:12:42.822 "read": true, 00:12:42.822 "write": true, 00:12:42.822 "unmap": true, 00:12:42.822 "flush": true, 00:12:42.822 "reset": true, 00:12:42.822 "nvme_admin": false, 00:12:42.822 "nvme_io": false, 00:12:42.822 "nvme_io_md": false, 00:12:42.822 "write_zeroes": true, 00:12:42.822 "zcopy": true, 00:12:42.822 "get_zone_info": false, 00:12:42.822 "zone_management": false, 00:12:42.822 "zone_append": false, 00:12:42.822 "compare": false, 00:12:42.822 "compare_and_write": false, 00:12:42.822 "abort": true, 00:12:42.822 "seek_hole": false, 00:12:42.822 "seek_data": false, 00:12:42.822 "copy": true, 00:12:42.822 "nvme_iov_md": false 00:12:42.822 }, 00:12:42.822 "memory_domains": [ 00:12:42.822 { 00:12:42.822 "dma_device_id": "system", 00:12:42.822 "dma_device_type": 1 00:12:42.822 }, 00:12:42.822 { 00:12:42.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.822 "dma_device_type": 2 00:12:42.822 } 00:12:42.822 ], 00:12:42.822 "driver_specific": { 00:12:42.822 "passthru": { 00:12:42.822 "name": "pt1", 00:12:42.822 "base_bdev_name": "malloc1" 00:12:42.822 } 00:12:42.822 } 00:12:42.822 }' 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:42.822 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.080 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.081 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:43.081 11:23:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:43.339 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:12:43.339 "name": "pt2", 00:12:43.339 "aliases": [ 00:12:43.339 "00000000-0000-0000-0000-000000000002" 00:12:43.339 ], 00:12:43.339 "product_name": "passthru", 00:12:43.339 "block_size": 512, 00:12:43.339 "num_blocks": 65536, 00:12:43.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:43.339 "assigned_rate_limits": { 00:12:43.339 "rw_ios_per_sec": 0, 00:12:43.339 "rw_mbytes_per_sec": 0, 00:12:43.339 "r_mbytes_per_sec": 0, 00:12:43.339 "w_mbytes_per_sec": 0 00:12:43.339 }, 00:12:43.339 "claimed": true, 00:12:43.339 "claim_type": "exclusive_write", 00:12:43.339 "zoned": false, 00:12:43.339 "supported_io_types": { 00:12:43.339 "read": true, 00:12:43.339 "write": true, 00:12:43.339 "unmap": true, 00:12:43.339 "flush": true, 00:12:43.339 "reset": true, 00:12:43.339 "nvme_admin": false, 00:12:43.339 "nvme_io": false, 00:12:43.339 "nvme_io_md": false, 00:12:43.339 "write_zeroes": true, 00:12:43.339 "zcopy": true, 00:12:43.339 "get_zone_info": false, 00:12:43.339 "zone_management": false, 00:12:43.339 "zone_append": false, 00:12:43.339 "compare": false, 00:12:43.339 "compare_and_write": false, 00:12:43.339 "abort": true, 00:12:43.339 "seek_hole": false, 00:12:43.339 "seek_data": false, 00:12:43.339 "copy": true, 00:12:43.339 "nvme_iov_md": false 00:12:43.339 }, 00:12:43.339 "memory_domains": [ 00:12:43.339 { 00:12:43.339 "dma_device_id": "system", 00:12:43.339 "dma_device_type": 1 00:12:43.339 }, 00:12:43.339 { 00:12:43.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.339 "dma_device_type": 2 00:12:43.339 } 00:12:43.339 ], 00:12:43.339 "driver_specific": { 00:12:43.339 "passthru": { 00:12:43.339 "name": "pt2", 00:12:43.339 "base_bdev_name": "malloc2" 00:12:43.339 } 00:12:43.339 } 00:12:43.339 }' 00:12:43.339 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.339 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:43.598 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.856 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:43.856 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:43.856 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:43.856 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:43.856 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.115 "name": "pt3", 00:12:44.115 "aliases": [ 00:12:44.115 "00000000-0000-0000-0000-000000000003" 00:12:44.115 ], 00:12:44.115 
"product_name": "passthru", 00:12:44.115 "block_size": 512, 00:12:44.115 "num_blocks": 65536, 00:12:44.115 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:44.115 "assigned_rate_limits": { 00:12:44.115 "rw_ios_per_sec": 0, 00:12:44.115 "rw_mbytes_per_sec": 0, 00:12:44.115 "r_mbytes_per_sec": 0, 00:12:44.115 "w_mbytes_per_sec": 0 00:12:44.115 }, 00:12:44.115 "claimed": true, 00:12:44.115 "claim_type": "exclusive_write", 00:12:44.115 "zoned": false, 00:12:44.115 "supported_io_types": { 00:12:44.115 "read": true, 00:12:44.115 "write": true, 00:12:44.115 "unmap": true, 00:12:44.115 "flush": true, 00:12:44.115 "reset": true, 00:12:44.115 "nvme_admin": false, 00:12:44.115 "nvme_io": false, 00:12:44.115 "nvme_io_md": false, 00:12:44.115 "write_zeroes": true, 00:12:44.115 "zcopy": true, 00:12:44.115 "get_zone_info": false, 00:12:44.115 "zone_management": false, 00:12:44.115 "zone_append": false, 00:12:44.115 "compare": false, 00:12:44.115 "compare_and_write": false, 00:12:44.115 "abort": true, 00:12:44.115 "seek_hole": false, 00:12:44.115 "seek_data": false, 00:12:44.115 "copy": true, 00:12:44.115 "nvme_iov_md": false 00:12:44.115 }, 00:12:44.115 "memory_domains": [ 00:12:44.115 { 00:12:44.115 "dma_device_id": "system", 00:12:44.115 "dma_device_type": 1 00:12:44.115 }, 00:12:44.115 { 00:12:44.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.115 "dma_device_type": 2 00:12:44.115 } 00:12:44.115 ], 00:12:44.115 "driver_specific": { 00:12:44.115 "passthru": { 00:12:44.115 "name": "pt3", 00:12:44.115 "base_bdev_name": "malloc3" 00:12:44.115 } 00:12:44.115 } 00:12:44.115 }' 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.115 11:23:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:12:44.372 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:44.630 [2024-07-25 11:24:00.419961] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.630 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e93b880a-8ad5-4069-9532-ab29aa1fadff 00:12:44.630 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z e93b880a-8ad5-4069-9532-ab29aa1fadff ']' 00:12:44.630 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:44.889 [2024-07-25 11:24:00.655656] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.889 [2024-07-25 11:24:00.655705] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.889 [2024-07-25 11:24:00.655815] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.889 [2024-07-25 11:24:00.655900] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.889 [2024-07-25 11:24:00.655922] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.889 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.889 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:12:45.148 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:12:45.148 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:12:45.148 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.148 11:24:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:45.406 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.406 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:45.664 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:45.664 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:45.923 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:45.923 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.181 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.181 11:24:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.182 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.182 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.182 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.182 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:46.182 11:24:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:46.441 [2024-07-25 11:24:02.192131] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:46.441 [2024-07-25 11:24:02.195321] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:46.441 [2024-07-25 11:24:02.195422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:46.441 [2024-07-25 11:24:02.195529] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:46.441 [2024-07-25 11:24:02.195650] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:46.441 [2024-07-25 11:24:02.195703] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:46.441 [2024-07-25 11:24:02.195733] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:46.441 [2024-07-25 11:24:02.195769] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:46.441 request: 00:12:46.441 { 00:12:46.441 "name": "raid_bdev1", 00:12:46.441 "raid_level": "concat", 00:12:46.441 "base_bdevs": [ 00:12:46.441 "malloc1", 00:12:46.441 "malloc2", 00:12:46.441 "malloc3" 00:12:46.441 ], 00:12:46.441 "strip_size_kb": 64, 00:12:46.441 "superblock": false, 00:12:46.441 "method": "bdev_raid_create", 00:12:46.441 "req_id": 1 00:12:46.441 } 00:12:46.441 Got JSON-RPC error response 00:12:46.441 response: 00:12:46.441 { 00:12:46.441 "code": -17, 00:12:46.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:46.441 } 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.441 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:12:46.699 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:12:46.699 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:12:46.699 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:46.957 [2024-07-25 11:24:02.660237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:46.957 [2024-07-25 11:24:02.660382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.957 [2024-07-25 11:24:02.660415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:46.957 [2024-07-25 11:24:02.660435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.957 [2024-07-25 11:24:02.663518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.957 [2024-07-25 11:24:02.663571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:46.957 [2024-07-25 11:24:02.663712] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:46.957 [2024-07-25 11:24:02.663800] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:46.957 pt1 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.957 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.214 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.214 "name": "raid_bdev1", 00:12:47.214 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:47.214 "strip_size_kb": 64, 00:12:47.214 "state": "configuring", 00:12:47.214 "raid_level": "concat", 00:12:47.214 "superblock": true, 00:12:47.214 "num_base_bdevs": 3, 00:12:47.214 "num_base_bdevs_discovered": 1, 00:12:47.214 "num_base_bdevs_operational": 3, 00:12:47.214 "base_bdevs_list": [ 00:12:47.214 { 00:12:47.214 "name": "pt1", 00:12:47.214 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.214 "is_configured": true, 00:12:47.214 "data_offset": 2048, 00:12:47.214 "data_size": 63488 00:12:47.214 }, 00:12:47.214 { 00:12:47.214 "name": null, 00:12:47.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.214 "is_configured": false, 00:12:47.214 "data_offset": 2048, 00:12:47.214 "data_size": 63488 00:12:47.214 }, 00:12:47.214 { 00:12:47.214 "name": null, 
00:12:47.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.214 "is_configured": false, 00:12:47.214 "data_offset": 2048, 00:12:47.214 "data_size": 63488 00:12:47.214 } 00:12:47.214 ] 00:12:47.214 }' 00:12:47.214 11:24:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.214 11:24:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.780 11:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:12:47.780 11:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:48.039 [2024-07-25 11:24:03.846277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:48.039 [2024-07-25 11:24:03.846407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.039 [2024-07-25 11:24:03.846444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:48.039 [2024-07-25 11:24:03.846464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.039 [2024-07-25 11:24:03.847176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.039 [2024-07-25 11:24:03.847232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:48.039 [2024-07-25 11:24:03.847348] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:48.039 [2024-07-25 11:24:03.847392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:48.039 pt2 00:12:48.039 11:24:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:48.297 [2024-07-25 11:24:04.106181] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.297 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.555 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.555 "name": 
"raid_bdev1", 00:12:48.555 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:48.555 "strip_size_kb": 64, 00:12:48.555 "state": "configuring", 00:12:48.555 "raid_level": "concat", 00:12:48.555 "superblock": true, 00:12:48.555 "num_base_bdevs": 3, 00:12:48.555 "num_base_bdevs_discovered": 1, 00:12:48.555 "num_base_bdevs_operational": 3, 00:12:48.555 "base_bdevs_list": [ 00:12:48.555 { 00:12:48.555 "name": "pt1", 00:12:48.555 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:48.555 "is_configured": true, 00:12:48.555 "data_offset": 2048, 00:12:48.555 "data_size": 63488 00:12:48.555 }, 00:12:48.555 { 00:12:48.555 "name": null, 00:12:48.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:48.555 "is_configured": false, 00:12:48.555 "data_offset": 2048, 00:12:48.555 "data_size": 63488 00:12:48.555 }, 00:12:48.555 { 00:12:48.555 "name": null, 00:12:48.555 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:48.555 "is_configured": false, 00:12:48.555 "data_offset": 2048, 00:12:48.555 "data_size": 63488 00:12:48.555 } 00:12:48.555 ] 00:12:48.555 }' 00:12:48.555 11:24:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.555 11:24:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:49.491 [2024-07-25 11:24:05.228370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:49.491 [2024-07-25 11:24:05.228479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.491 [2024-07-25 11:24:05.228518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:49.491 [2024-07-25 11:24:05.228535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.491 [2024-07-25 11:24:05.229163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.491 [2024-07-25 11:24:05.229198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:49.491 [2024-07-25 11:24:05.229318] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:49.491 [2024-07-25 11:24:05.229353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:49.491 pt2 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:49.491 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:49.750 [2024-07-25 11:24:05.496481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:49.750 [2024-07-25 11:24:05.496887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.750 [2024-07-25 11:24:05.496977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:49.750 [2024-07-25 11:24:05.497113] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.750 [2024-07-25 11:24:05.497758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.750 [2024-07-25 11:24:05.497922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:49.750 [2024-07-25 11:24:05.498165] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:49.750 [2024-07-25 11:24:05.498316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:49.750 [2024-07-25 11:24:05.498613] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:49.750 [2024-07-25 11:24:05.498771] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:49.750 [2024-07-25 11:24:05.499147] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:49.750 [2024-07-25 11:24:05.499456] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:49.750 [2024-07-25 11:24:05.499597] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:49.750 [2024-07-25 11:24:05.499955] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.750 pt3 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.750 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.012 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.012 "name": "raid_bdev1", 00:12:50.012 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:50.012 "strip_size_kb": 64, 00:12:50.012 "state": "online", 00:12:50.012 "raid_level": "concat", 00:12:50.012 "superblock": true, 00:12:50.012 "num_base_bdevs": 3, 00:12:50.012 "num_base_bdevs_discovered": 3, 00:12:50.012 "num_base_bdevs_operational": 3, 00:12:50.012 "base_bdevs_list": [ 00:12:50.012 { 00:12:50.012 "name": "pt1", 00:12:50.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:50.012 
"is_configured": true, 00:12:50.012 "data_offset": 2048, 00:12:50.012 "data_size": 63488 00:12:50.012 }, 00:12:50.012 { 00:12:50.012 "name": "pt2", 00:12:50.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 2048, 00:12:50.012 "data_size": 63488 00:12:50.012 }, 00:12:50.012 { 00:12:50.012 "name": "pt3", 00:12:50.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.012 "is_configured": true, 00:12:50.012 "data_offset": 2048, 00:12:50.012 "data_size": 63488 00:12:50.012 } 00:12:50.012 ] 00:12:50.012 }' 00:12:50.012 11:24:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.012 11:24:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:50.580 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:50.839 [2024-07-25 11:24:06.637165] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:50.839 "name": "raid_bdev1", 00:12:50.839 "aliases": [ 00:12:50.839 "e93b880a-8ad5-4069-9532-ab29aa1fadff" 00:12:50.839 ], 00:12:50.839 "product_name": "Raid Volume", 00:12:50.839 "block_size": 512, 00:12:50.839 "num_blocks": 190464, 00:12:50.839 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:50.839 "assigned_rate_limits": { 00:12:50.839 "rw_ios_per_sec": 0, 00:12:50.839 "rw_mbytes_per_sec": 0, 00:12:50.839 "r_mbytes_per_sec": 0, 00:12:50.839 "w_mbytes_per_sec": 0 00:12:50.839 }, 00:12:50.839 "claimed": false, 00:12:50.839 "zoned": false, 00:12:50.839 "supported_io_types": { 00:12:50.839 "read": true, 00:12:50.839 "write": true, 00:12:50.839 "unmap": true, 00:12:50.839 "flush": true, 00:12:50.839 "reset": true, 00:12:50.839 "nvme_admin": false, 00:12:50.839 "nvme_io": false, 00:12:50.839 "nvme_io_md": false, 00:12:50.839 "write_zeroes": true, 00:12:50.839 "zcopy": false, 00:12:50.839 "get_zone_info": false, 00:12:50.839 "zone_management": false, 00:12:50.839 "zone_append": false, 00:12:50.839 "compare": false, 00:12:50.839 "compare_and_write": false, 00:12:50.839 "abort": false, 00:12:50.839 "seek_hole": false, 00:12:50.839 "seek_data": false, 00:12:50.839 "copy": false, 00:12:50.839 "nvme_iov_md": false 00:12:50.839 }, 00:12:50.839 "memory_domains": [ 00:12:50.839 { 00:12:50.839 "dma_device_id": "system", 00:12:50.839 "dma_device_type": 1 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.839 "dma_device_type": 2 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "dma_device_id": "system", 00:12:50.839 "dma_device_type": 1 00:12:50.839 }, 00:12:50.839 { 
00:12:50.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.839 "dma_device_type": 2 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "dma_device_id": "system", 00:12:50.839 "dma_device_type": 1 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.839 "dma_device_type": 2 00:12:50.839 } 00:12:50.839 ], 00:12:50.839 "driver_specific": { 00:12:50.839 "raid": { 00:12:50.839 "uuid": "e93b880a-8ad5-4069-9532-ab29aa1fadff", 00:12:50.839 "strip_size_kb": 64, 00:12:50.839 "state": "online", 00:12:50.839 "raid_level": "concat", 00:12:50.839 "superblock": true, 00:12:50.839 "num_base_bdevs": 3, 00:12:50.839 "num_base_bdevs_discovered": 3, 00:12:50.839 "num_base_bdevs_operational": 3, 00:12:50.839 "base_bdevs_list": [ 00:12:50.839 { 00:12:50.839 "name": "pt1", 00:12:50.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:50.839 "is_configured": true, 00:12:50.839 "data_offset": 2048, 00:12:50.839 "data_size": 63488 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "name": "pt2", 00:12:50.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.839 "is_configured": true, 00:12:50.839 "data_offset": 2048, 00:12:50.839 "data_size": 63488 00:12:50.839 }, 00:12:50.839 { 00:12:50.839 "name": "pt3", 00:12:50.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.839 "is_configured": true, 00:12:50.839 "data_offset": 2048, 00:12:50.839 "data_size": 63488 00:12:50.839 } 00:12:50.839 ] 00:12:50.839 } 00:12:50.839 } 00:12:50.839 }' 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:50.839 pt2 00:12:50.839 pt3' 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:50.839 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:51.098 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:51.098 "name": "pt1", 00:12:51.098 "aliases": [ 00:12:51.098 "00000000-0000-0000-0000-000000000001" 00:12:51.098 ], 00:12:51.098 "product_name": "passthru", 00:12:51.098 "block_size": 512, 00:12:51.098 "num_blocks": 65536, 00:12:51.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:51.098 "assigned_rate_limits": { 00:12:51.098 "rw_ios_per_sec": 0, 00:12:51.098 "rw_mbytes_per_sec": 0, 00:12:51.098 "r_mbytes_per_sec": 0, 00:12:51.098 "w_mbytes_per_sec": 0 00:12:51.098 }, 00:12:51.098 "claimed": true, 00:12:51.098 "claim_type": "exclusive_write", 00:12:51.098 "zoned": false, 00:12:51.098 "supported_io_types": { 00:12:51.098 "read": true, 00:12:51.098 "write": true, 00:12:51.098 "unmap": true, 00:12:51.098 "flush": true, 00:12:51.098 "reset": true, 00:12:51.098 "nvme_admin": false, 00:12:51.098 "nvme_io": false, 00:12:51.098 "nvme_io_md": false, 00:12:51.098 "write_zeroes": true, 00:12:51.098 "zcopy": true, 00:12:51.098 "get_zone_info": false, 00:12:51.098 "zone_management": false, 00:12:51.098 "zone_append": false, 00:12:51.098 "compare": false, 00:12:51.098 "compare_and_write": false, 00:12:51.098 "abort": true, 00:12:51.098 "seek_hole": false, 00:12:51.098 "seek_data": false, 00:12:51.098 "copy": true, 00:12:51.098 "nvme_iov_md": false 00:12:51.098 
}, 00:12:51.098 "memory_domains": [ 00:12:51.098 { 00:12:51.098 "dma_device_id": "system", 00:12:51.098 "dma_device_type": 1 00:12:51.098 }, 00:12:51.098 { 00:12:51.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.098 "dma_device_type": 2 00:12:51.098 } 00:12:51.098 ], 00:12:51.098 "driver_specific": { 00:12:51.098 "passthru": { 00:12:51.098 "name": "pt1", 00:12:51.098 "base_bdev_name": "malloc1" 00:12:51.098 } 00:12:51.098 } 00:12:51.098 }' 00:12:51.098 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.356 11:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:51.356 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:51.615 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:51.615 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:51.615 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:51.615 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:51.615 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:51.873 "name": "pt2", 00:12:51.873 "aliases": [ 00:12:51.873 "00000000-0000-0000-0000-000000000002" 00:12:51.873 ], 00:12:51.873 "product_name": "passthru", 00:12:51.873 "block_size": 512, 00:12:51.873 "num_blocks": 65536, 00:12:51.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.873 "assigned_rate_limits": { 00:12:51.873 "rw_ios_per_sec": 0, 00:12:51.873 "rw_mbytes_per_sec": 0, 00:12:51.873 "r_mbytes_per_sec": 0, 00:12:51.873 "w_mbytes_per_sec": 0 00:12:51.873 }, 00:12:51.873 "claimed": true, 00:12:51.873 "claim_type": "exclusive_write", 00:12:51.873 "zoned": false, 00:12:51.873 "supported_io_types": { 00:12:51.873 "read": true, 00:12:51.873 "write": true, 00:12:51.873 "unmap": true, 00:12:51.873 "flush": true, 00:12:51.873 "reset": true, 00:12:51.873 "nvme_admin": false, 00:12:51.873 "nvme_io": false, 00:12:51.873 "nvme_io_md": false, 00:12:51.873 "write_zeroes": true, 00:12:51.873 "zcopy": true, 00:12:51.873 "get_zone_info": false, 00:12:51.873 "zone_management": false, 00:12:51.873 "zone_append": false, 00:12:51.873 "compare": false, 00:12:51.873 "compare_and_write": false, 00:12:51.873 "abort": true, 00:12:51.873 "seek_hole": false, 00:12:51.873 "seek_data": false, 00:12:51.873 "copy": true, 00:12:51.873 "nvme_iov_md": false 00:12:51.873 }, 00:12:51.873 "memory_domains": [ 00:12:51.873 { 00:12:51.873 "dma_device_id": "system", 00:12:51.873 "dma_device_type": 1 00:12:51.873 }, 00:12:51.873 
{ 00:12:51.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.873 "dma_device_type": 2 00:12:51.873 } 00:12:51.873 ], 00:12:51.873 "driver_specific": { 00:12:51.873 "passthru": { 00:12:51.873 "name": "pt2", 00:12:51.873 "base_bdev_name": "malloc2" 00:12:51.873 } 00:12:51.873 } 00:12:51.873 }' 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:51.873 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:52.131 11:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:52.389 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:52.389 "name": "pt3", 00:12:52.389 "aliases": [ 00:12:52.389 "00000000-0000-0000-0000-000000000003" 00:12:52.389 ], 00:12:52.389 "product_name": "passthru", 00:12:52.389 "block_size": 512, 00:12:52.389 "num_blocks": 65536, 00:12:52.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.389 "assigned_rate_limits": { 00:12:52.389 "rw_ios_per_sec": 0, 00:12:52.389 "rw_mbytes_per_sec": 0, 00:12:52.389 "r_mbytes_per_sec": 0, 00:12:52.389 "w_mbytes_per_sec": 0 00:12:52.389 }, 00:12:52.389 "claimed": true, 00:12:52.389 "claim_type": "exclusive_write", 00:12:52.389 "zoned": false, 00:12:52.389 "supported_io_types": { 00:12:52.389 "read": true, 00:12:52.389 "write": true, 00:12:52.389 "unmap": true, 00:12:52.390 "flush": true, 00:12:52.390 "reset": true, 00:12:52.390 "nvme_admin": false, 00:12:52.390 "nvme_io": false, 00:12:52.390 "nvme_io_md": false, 00:12:52.390 "write_zeroes": true, 00:12:52.390 "zcopy": true, 00:12:52.390 "get_zone_info": false, 00:12:52.390 "zone_management": false, 00:12:52.390 "zone_append": false, 00:12:52.390 "compare": false, 00:12:52.390 "compare_and_write": false, 00:12:52.390 "abort": true, 00:12:52.390 "seek_hole": false, 00:12:52.390 "seek_data": false, 00:12:52.390 "copy": true, 00:12:52.390 "nvme_iov_md": false 00:12:52.390 }, 00:12:52.390 "memory_domains": [ 00:12:52.390 { 00:12:52.390 "dma_device_id": "system", 00:12:52.390 "dma_device_type": 1 00:12:52.390 }, 00:12:52.390 { 00:12:52.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.390 "dma_device_type": 2 00:12:52.390 } 00:12:52.390 ], 00:12:52.390 "driver_specific": 
{ 00:12:52.390 "passthru": { 00:12:52.390 "name": "pt3", 00:12:52.390 "base_bdev_name": "malloc3" 00:12:52.390 } 00:12:52.390 } 00:12:52.390 }' 00:12:52.390 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:52.666 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:52.925 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:12:53.184 [2024-07-25 11:24:08.900076] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' e93b880a-8ad5-4069-9532-ab29aa1fadff '!=' e93b880a-8ad5-4069-9532-ab29aa1fadff ']' 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 72036 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72036 ']' 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72036 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72036 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72036' 00:12:53.184 killing process with pid 72036 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72036 00:12:53.184 [2024-07-25 11:24:08.950375] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.184 11:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # 
wait 72036 00:12:53.184 [2024-07-25 11:24:08.950498] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.184 [2024-07-25 11:24:08.950584] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.184 [2024-07-25 11:24:08.950600] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:53.442 [2024-07-25 11:24:09.211418] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:54.816 11:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:12:54.816 00:12:54.816 real 0m16.433s 00:12:54.816 user 0m28.948s 00:12:54.816 sys 0m2.222s 00:12:54.816 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.816 ************************************ 00:12:54.816 END TEST raid_superblock_test 00:12:54.816 ************************************ 00:12:54.816 11:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.816 11:24:10 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:54.816 11:24:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:54.816 11:24:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.816 11:24:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:54.816 ************************************ 00:12:54.816 START TEST raid_read_error_test 00:12:54.816 ************************************ 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # local strip_size 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.n5R3eRGhtm 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=72516 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 72516 /var/tmp/spdk-raid.sock 00:12:54.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:54.816 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72516 ']' 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.817 11:24:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.817 [2024-07-25 11:24:10.558570] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
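The block above is the bring-up of raid_read_error_test: it fixes strip_size=64 for concat, creates a scratch bdevperf log under /raidtest with mktemp, launches build/examples/bdevperf as a long-running target on /var/tmp/spdk-raid.sock (-z keeps it idle until a perform_tests RPC arrives), records its pid, and blocks in waitforlisten until the socket answers. A minimal sketch of the same bring-up, assuming a hypothetical $SPDK_DIR pointing at the checkout and a crude rpc_get_methods poll standing in for the autotest waitforlisten helper:

    bdevperf_log=$(mktemp -p /raidtest)
    "$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    raid_pid=$!
    # stand-in for waitforlisten: retry until the UNIX-domain RPC socket accepts calls
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done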
00:12:54.817 [2024-07-25 11:24:10.558783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72516 ] 00:12:55.075 [2024-07-25 11:24:10.733704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.333 [2024-07-25 11:24:10.968288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.333 [2024-07-25 11:24:11.169193] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.333 [2024-07-25 11:24:11.169274] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.898 11:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.899 11:24:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:55.899 11:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:55.899 11:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:55.899 BaseBdev1_malloc 00:12:55.899 11:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:56.208 true 00:12:56.208 11:24:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:56.482 [2024-07-25 11:24:12.203401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:56.482 [2024-07-25 11:24:12.203517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.482 [2024-07-25 11:24:12.203555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:56.482 [2024-07-25 11:24:12.203570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.482 [2024-07-25 11:24:12.206431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.482 [2024-07-25 11:24:12.206490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.482 BaseBdev1 00:12:56.482 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:56.482 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.740 BaseBdev2_malloc 00:12:56.740 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:56.998 true 00:12:56.998 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:57.256 [2024-07-25 11:24:12.919242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:57.256 [2024-07-25 11:24:12.919319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.256 [2024-07-25 11:24:12.919358] vbdev_passthru.c: 
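With the target's reactor up, the trace that follows builds each RAID member as a three-layer stack so that errors can later be injected beneath the passthru bdev the RAID volume actually claims: a 32 MB malloc bdev with 512-byte blocks, an error bdev wrapped around it, and a passthru bdev on top. Roughly, using a hypothetical rpc() shorthand for the rpc.py invocation seen in the trace:

    rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    for i in 1 2 3; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        rpc bdev_error_create "BaseBdev${i}_malloc"                 # exposed as EE_BaseBdev${i}_malloc
        rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

The trailing -s asks bdev_raid_create to place an on-disk superblock on the volume, matching the "superblock": true state the test later verifies.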
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:57.256 [2024-07-25 11:24:12.919373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.256 [2024-07-25 11:24:12.922110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.256 [2024-07-25 11:24:12.922168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:57.256 BaseBdev2 00:12:57.256 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:57.256 11:24:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:57.515 BaseBdev3_malloc 00:12:57.515 11:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:57.773 true 00:12:57.773 11:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:58.030 [2024-07-25 11:24:13.735555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:58.030 [2024-07-25 11:24:13.735645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.030 [2024-07-25 11:24:13.735685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:58.030 [2024-07-25 11:24:13.735715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.030 [2024-07-25 11:24:13.738453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.030 [2024-07-25 11:24:13.738499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:58.030 BaseBdev3 00:12:58.030 11:24:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:58.287 [2024-07-25 11:24:14.019726] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.287 [2024-07-25 11:24:14.022347] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.287 [2024-07-25 11:24:14.022615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.287 [2024-07-25 11:24:14.023051] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:58.287 [2024-07-25 11:24:14.023194] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:58.287 [2024-07-25 11:24:14.023683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:58.287 [2024-07-25 11:24:14.024071] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:58.287 [2024-07-25 11:24:14.024200] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:58.287 [2024-07-25 11:24:14.024643] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:58.287 11:24:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.287 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.544 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.544 "name": "raid_bdev1", 00:12:58.544 "uuid": "aa042cb0-7374-49e8-a3ce-6115847b22c6", 00:12:58.544 "strip_size_kb": 64, 00:12:58.544 "state": "online", 00:12:58.544 "raid_level": "concat", 00:12:58.544 "superblock": true, 00:12:58.544 "num_base_bdevs": 3, 00:12:58.544 "num_base_bdevs_discovered": 3, 00:12:58.544 "num_base_bdevs_operational": 3, 00:12:58.544 "base_bdevs_list": [ 00:12:58.544 { 00:12:58.544 "name": "BaseBdev1", 00:12:58.544 "uuid": "671252e6-3a4c-581e-9eac-6cdf63a9d2db", 00:12:58.544 "is_configured": true, 00:12:58.544 "data_offset": 2048, 00:12:58.544 "data_size": 63488 00:12:58.544 }, 00:12:58.544 { 00:12:58.544 "name": "BaseBdev2", 00:12:58.544 "uuid": "9a12ed81-3b2c-513e-8522-420347792afe", 00:12:58.544 "is_configured": true, 00:12:58.544 "data_offset": 2048, 00:12:58.544 "data_size": 63488 00:12:58.544 }, 00:12:58.544 { 00:12:58.544 "name": "BaseBdev3", 00:12:58.544 "uuid": "edd2fc6c-91d1-5f8a-8a33-8a852e46637d", 00:12:58.544 "is_configured": true, 00:12:58.544 "data_offset": 2048, 00:12:58.544 "data_size": 63488 00:12:58.544 } 00:12:58.544 ] 00:12:58.544 }' 00:12:58.544 11:24:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.544 11:24:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.477 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:59.477 11:24:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:59.477 [2024-07-25 11:24:15.130324] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.412 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.413 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.671 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.671 "name": "raid_bdev1", 00:13:00.671 "uuid": "aa042cb0-7374-49e8-a3ce-6115847b22c6", 00:13:00.671 "strip_size_kb": 64, 00:13:00.671 "state": "online", 00:13:00.671 "raid_level": "concat", 00:13:00.671 "superblock": true, 00:13:00.671 "num_base_bdevs": 3, 00:13:00.671 "num_base_bdevs_discovered": 3, 00:13:00.671 "num_base_bdevs_operational": 3, 00:13:00.671 "base_bdevs_list": [ 00:13:00.671 { 00:13:00.671 "name": "BaseBdev1", 00:13:00.671 "uuid": "671252e6-3a4c-581e-9eac-6cdf63a9d2db", 00:13:00.671 "is_configured": true, 00:13:00.671 "data_offset": 2048, 00:13:00.671 "data_size": 63488 00:13:00.671 }, 00:13:00.671 { 00:13:00.671 "name": "BaseBdev2", 00:13:00.671 "uuid": "9a12ed81-3b2c-513e-8522-420347792afe", 00:13:00.671 "is_configured": true, 00:13:00.671 "data_offset": 2048, 00:13:00.671 "data_size": 63488 00:13:00.671 }, 00:13:00.671 { 00:13:00.671 "name": "BaseBdev3", 00:13:00.671 "uuid": "edd2fc6c-91d1-5f8a-8a33-8a852e46637d", 00:13:00.671 "is_configured": true, 00:13:00.671 "data_offset": 2048, 00:13:00.671 "data_size": 63488 00:13:00.671 } 00:13:00.671 ] 00:13:00.671 }' 00:13:00.671 11:24:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.671 11:24:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:01.606 [2024-07-25 11:24:17.423219] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.606 [2024-07-25 11:24:17.423580] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.606 [2024-07-25 11:24:17.426804] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.606 [2024-07-25 11:24:17.426868] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.606 [2024-07-25 
11:24:17.426914] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.606 [2024-07-25 11:24:17.426930] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:01.606 0 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 72516 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72516 ']' 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72516 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72516 00:13:01.606 killing process with pid 72516 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72516' 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72516 00:13:01.606 [2024-07-25 11:24:17.467213] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.606 11:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72516 00:13:01.864 [2024-07-25 11:24:17.671752] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.n5R3eRGhtm 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:03.237 ************************************ 00:13:03.237 END TEST raid_read_error_test 00:13:03.237 ************************************ 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.44 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.44 != \0\.\0\0 ]] 00:13:03.237 00:13:03.237 real 0m8.460s 00:13:03.237 user 0m12.853s 00:13:03.237 sys 0m1.055s 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.237 11:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.237 11:24:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:03.237 11:24:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:03.237 11:24:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.237 11:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.237 ************************************ 00:13:03.237 START TEST raid_write_error_test 00:13:03.237 ************************************ 
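The write-error test that starts here exercises the same RPC flow as the read-error test above, only injecting a write failure into EE_BaseBdev1_malloc instead of a read failure. Reconstructed from the trace (the bdev names, sizes and the concat/strip-size arguments are taken verbatim from the log; the compact loop, the bare rpc.py/bdevperf.py names and the final jq filter are illustrative assumptions, not the literal bdev_raid.sh code), the sequence driven by raid_io_error_test is roughly:

  # minimal sketch, assuming rpc.py / bdevperf.py are on PATH and bdevperf was already
  # started with -z (wait for RPC) on the RAID test socket, as shown in the log
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2 3; do
    rpc.py -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"        # backing malloc bdev
    rpc.py -s "$sock" bdev_error_create "BaseBdev${i}_malloc"                   # error-injection wrapper EE_BaseBdev${i}_malloc
    rpc.py -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  rpc.py -s "$sock" bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  rpc.py -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure   # 'read failure' in the read variant
  bdevperf.py -s "$sock" perform_tests                                          # run the queued randrw job
  rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  rpc.py -s "$sock" bdev_raid_delete raid_bdev1

The pass/fail decision at the end of each variant comes from the bdevperf log: the per-second failure count for raid_bdev1 is pulled out with grep/awk (fail_per_s above), and because concat has no redundancy the injected error is expected to reach the raid bdev, so the test asserts that fail_per_s is not 0.00.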
00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.mHe8cBlbnf 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=72714 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 72714 /var/tmp/spdk-raid.sock 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72714 ']' 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:03.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.237 11:24:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.237 [2024-07-25 11:24:19.052100] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:13:03.237 [2024-07-25 11:24:19.052297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72714 ] 00:13:03.496 [2024-07-25 11:24:19.228951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.754 [2024-07-25 11:24:19.459046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.012 [2024-07-25 11:24:19.664497] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.012 [2024-07-25 11:24:19.664553] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.269 11:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.270 11:24:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:04.270 11:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:04.270 11:24:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.528 BaseBdev1_malloc 00:13:04.528 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:04.786 true 00:13:04.786 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:05.044 [2024-07-25 11:24:20.706234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:05.044 [2024-07-25 11:24:20.706376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.044 [2024-07-25 11:24:20.706435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:05.044 [2024-07-25 11:24:20.706469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.045 [2024-07-25 11:24:20.712584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.045 [2024-07-25 11:24:20.712643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.045 BaseBdev1 00:13:05.045 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:05.045 11:24:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:13:05.302 BaseBdev2_malloc 00:13:05.302 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:05.560 true 00:13:05.560 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.819 [2024-07-25 11:24:21.582703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.819 [2024-07-25 11:24:21.582783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.819 [2024-07-25 11:24:21.582823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.819 [2024-07-25 11:24:21.582844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.819 [2024-07-25 11:24:21.585631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.819 [2024-07-25 11:24:21.585675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.819 BaseBdev2 00:13:05.819 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:05.819 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:06.077 BaseBdev3_malloc 00:13:06.077 11:24:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:06.335 true 00:13:06.335 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:06.593 [2024-07-25 11:24:22.370991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:06.593 [2024-07-25 11:24:22.371271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.593 [2024-07-25 11:24:22.371324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:06.593 [2024-07-25 11:24:22.371341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.593 [2024-07-25 11:24:22.374860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.593 [2024-07-25 11:24:22.374920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:06.593 BaseBdev3 00:13:06.593 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:06.855 [2024-07-25 11:24:22.647297] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.855 [2024-07-25 11:24:22.649766] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.855 [2024-07-25 11:24:22.649888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.855 [2024-07-25 11:24:22.650182] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:06.855 [2024-07-25 11:24:22.650217] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:06.855 [2024-07-25 11:24:22.650567] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:06.855 [2024-07-25 11:24:22.651012] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:06.855 [2024-07-25 11:24:22.651149] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:06.855 [2024-07-25 11:24:22.651558] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.855 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:06.855 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:06.855 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.856 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.114 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.114 "name": "raid_bdev1", 00:13:07.114 "uuid": "8dcbbfca-4497-44f9-907a-9b33a04e827e", 00:13:07.114 "strip_size_kb": 64, 00:13:07.114 "state": "online", 00:13:07.114 "raid_level": "concat", 00:13:07.114 "superblock": true, 00:13:07.114 "num_base_bdevs": 3, 00:13:07.114 "num_base_bdevs_discovered": 3, 00:13:07.114 "num_base_bdevs_operational": 3, 00:13:07.114 "base_bdevs_list": [ 00:13:07.114 { 00:13:07.114 "name": "BaseBdev1", 00:13:07.114 "uuid": "37d3e10d-00c9-5872-bf68-e3ed4f39539d", 00:13:07.114 "is_configured": true, 00:13:07.114 "data_offset": 2048, 00:13:07.114 "data_size": 63488 00:13:07.114 }, 00:13:07.114 { 00:13:07.114 "name": "BaseBdev2", 00:13:07.114 "uuid": "eedde9d3-fc56-5283-a5a2-b4a5253ccc72", 00:13:07.114 "is_configured": true, 00:13:07.114 "data_offset": 2048, 00:13:07.114 "data_size": 63488 00:13:07.114 }, 00:13:07.114 { 00:13:07.114 "name": "BaseBdev3", 00:13:07.114 "uuid": "ef14e30a-5dc9-5dd0-a835-d8aa48bddd8d", 00:13:07.114 "is_configured": true, 00:13:07.114 "data_offset": 2048, 00:13:07.114 "data_size": 63488 00:13:07.114 } 00:13:07.114 ] 00:13:07.114 }' 00:13:07.114 11:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.114 11:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.047 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:08.047 11:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:08.047 [2024-07-25 11:24:23.669307] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.981 11:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.238 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.238 "name": "raid_bdev1", 00:13:09.238 "uuid": "8dcbbfca-4497-44f9-907a-9b33a04e827e", 00:13:09.238 "strip_size_kb": 64, 00:13:09.238 "state": "online", 00:13:09.238 "raid_level": "concat", 00:13:09.238 "superblock": true, 00:13:09.238 "num_base_bdevs": 3, 00:13:09.238 "num_base_bdevs_discovered": 3, 00:13:09.238 "num_base_bdevs_operational": 3, 00:13:09.238 "base_bdevs_list": [ 00:13:09.238 { 00:13:09.238 "name": "BaseBdev1", 00:13:09.238 "uuid": "37d3e10d-00c9-5872-bf68-e3ed4f39539d", 00:13:09.238 "is_configured": true, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 }, 00:13:09.238 { 00:13:09.238 "name": "BaseBdev2", 00:13:09.238 "uuid": "eedde9d3-fc56-5283-a5a2-b4a5253ccc72", 00:13:09.238 "is_configured": true, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 }, 00:13:09.238 { 00:13:09.238 "name": "BaseBdev3", 00:13:09.238 "uuid": "ef14e30a-5dc9-5dd0-a835-d8aa48bddd8d", 00:13:09.238 "is_configured": true, 00:13:09.238 "data_offset": 2048, 00:13:09.238 "data_size": 63488 00:13:09.238 } 00:13:09.238 ] 00:13:09.238 }' 00:13:09.238 11:24:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.238 11:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.172 11:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:10.430 [2024-07-25 11:24:26.062969] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.430 [2024-07-25 11:24:26.063041] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.430 [2024-07-25 11:24:26.066253] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.430 [2024-07-25 11:24:26.066328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.430 [2024-07-25 11:24:26.066377] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.430 [2024-07-25 11:24:26.066394] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:10.430 0 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 72714 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72714 ']' 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72714 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72714 00:13:10.430 killing process with pid 72714 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72714' 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72714 00:13:10.430 [2024-07-25 11:24:26.103371] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.430 11:24:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72714 00:13:10.430 [2024-07-25 11:24:26.311255] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.mHe8cBlbnf 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:13:11.822 00:13:11.822 real 0m8.591s 00:13:11.822 user 0m13.077s 00:13:11.822 sys 0m1.069s 00:13:11.822 11:24:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.822 11:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.822 ************************************ 00:13:11.822 END TEST raid_write_error_test 00:13:11.822 ************************************ 00:13:11.822 11:24:27 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:11.822 11:24:27 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:11.822 11:24:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:11.822 11:24:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.822 11:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.822 ************************************ 00:13:11.822 START TEST raid_state_function_test 00:13:11.822 ************************************ 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:11.822 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=72908 00:13:11.823 Process raid pid: 72908 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 72908' 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 72908 /var/tmp/spdk-raid.sock 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72908 ']' 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.823 11:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.823 [2024-07-25 11:24:27.675126] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
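raid_state_function_test takes a different angle: instead of bdevperf it starts a plain bdev_svc app (pid 72908 above) and walks Existed_Raid through its states. As the trace below shows, bdev_raid_create -r raid1 is issued while BaseBdev1/2/3 do not exist yet, so the raid bdev sits in the "configuring" state; each subsequent bdev_malloc_create lets the raid module claim one more base bdev, and only once all three are claimed does the state flip to "online". The check performed after every step corresponds to the verify_raid_bdev_state helper seen in the trace; a condensed sketch (the check_state name and the trailing .state extraction are illustrative, the RPC call and the jq selector are verbatim from the log):

  # hypothetical condensed form of the state check used throughout this test
  check_state() {
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  }
  check_state   # "configuring" until num_base_bdevs_discovered reaches 3, then "online"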
00:13:11.823 [2024-07-25 11:24:27.675290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.080 [2024-07-25 11:24:27.840729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.339 [2024-07-25 11:24:28.087079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.597 [2024-07-25 11:24:28.293045] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.597 [2024-07-25 11:24:28.293129] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.856 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.856 11:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:12.856 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:13.114 [2024-07-25 11:24:28.903943] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.114 [2024-07-25 11:24:28.904025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.114 [2024-07-25 11:24:28.904047] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.114 [2024-07-25 11:24:28.904061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.114 [2024-07-25 11:24:28.904073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.114 [2024-07-25 11:24:28.904084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.114 11:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.372 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:13:13.372 "name": "Existed_Raid", 00:13:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.372 "strip_size_kb": 0, 00:13:13.372 "state": "configuring", 00:13:13.372 "raid_level": "raid1", 00:13:13.372 "superblock": false, 00:13:13.372 "num_base_bdevs": 3, 00:13:13.372 "num_base_bdevs_discovered": 0, 00:13:13.372 "num_base_bdevs_operational": 3, 00:13:13.372 "base_bdevs_list": [ 00:13:13.372 { 00:13:13.372 "name": "BaseBdev1", 00:13:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.372 "is_configured": false, 00:13:13.372 "data_offset": 0, 00:13:13.372 "data_size": 0 00:13:13.372 }, 00:13:13.372 { 00:13:13.372 "name": "BaseBdev2", 00:13:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.372 "is_configured": false, 00:13:13.372 "data_offset": 0, 00:13:13.372 "data_size": 0 00:13:13.372 }, 00:13:13.372 { 00:13:13.372 "name": "BaseBdev3", 00:13:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.372 "is_configured": false, 00:13:13.372 "data_offset": 0, 00:13:13.372 "data_size": 0 00:13:13.372 } 00:13:13.372 ] 00:13:13.372 }' 00:13:13.372 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.372 11:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.305 11:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:14.305 [2024-07-25 11:24:30.128112] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.306 [2024-07-25 11:24:30.128176] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:14.306 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:14.564 [2024-07-25 11:24:30.408195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.564 [2024-07-25 11:24:30.408268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.564 [2024-07-25 11:24:30.408292] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.564 [2024-07-25 11:24:30.408305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.564 [2024-07-25 11:24:30.408317] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.564 [2024-07-25 11:24:30.408328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.564 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.129 [2024-07-25 11:24:30.720473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.129 BaseBdev1 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.129 11:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.695 [ 00:13:15.695 { 00:13:15.695 "name": "BaseBdev1", 00:13:15.695 "aliases": [ 00:13:15.695 "2b3f31e8-941c-47c4-abb5-ddcd3454c545" 00:13:15.695 ], 00:13:15.695 "product_name": "Malloc disk", 00:13:15.695 "block_size": 512, 00:13:15.695 "num_blocks": 65536, 00:13:15.695 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:15.695 "assigned_rate_limits": { 00:13:15.695 "rw_ios_per_sec": 0, 00:13:15.695 "rw_mbytes_per_sec": 0, 00:13:15.695 "r_mbytes_per_sec": 0, 00:13:15.695 "w_mbytes_per_sec": 0 00:13:15.695 }, 00:13:15.695 "claimed": true, 00:13:15.695 "claim_type": "exclusive_write", 00:13:15.695 "zoned": false, 00:13:15.695 "supported_io_types": { 00:13:15.695 "read": true, 00:13:15.695 "write": true, 00:13:15.695 "unmap": true, 00:13:15.695 "flush": true, 00:13:15.695 "reset": true, 00:13:15.695 "nvme_admin": false, 00:13:15.695 "nvme_io": false, 00:13:15.695 "nvme_io_md": false, 00:13:15.695 "write_zeroes": true, 00:13:15.695 "zcopy": true, 00:13:15.695 "get_zone_info": false, 00:13:15.695 "zone_management": false, 00:13:15.695 "zone_append": false, 00:13:15.695 "compare": false, 00:13:15.695 "compare_and_write": false, 00:13:15.695 "abort": true, 00:13:15.695 "seek_hole": false, 00:13:15.695 "seek_data": false, 00:13:15.695 "copy": true, 00:13:15.695 "nvme_iov_md": false 00:13:15.695 }, 00:13:15.695 "memory_domains": [ 00:13:15.695 { 00:13:15.695 "dma_device_id": "system", 00:13:15.695 "dma_device_type": 1 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.695 "dma_device_type": 2 00:13:15.695 } 00:13:15.695 ], 00:13:15.695 "driver_specific": {} 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.695 "name": "Existed_Raid", 00:13:15.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.695 "strip_size_kb": 0, 00:13:15.695 "state": "configuring", 00:13:15.695 "raid_level": "raid1", 00:13:15.695 "superblock": false, 00:13:15.695 "num_base_bdevs": 3, 00:13:15.695 "num_base_bdevs_discovered": 1, 00:13:15.695 "num_base_bdevs_operational": 3, 00:13:15.695 "base_bdevs_list": [ 00:13:15.695 { 00:13:15.695 "name": "BaseBdev1", 00:13:15.695 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:15.695 "is_configured": true, 00:13:15.695 "data_offset": 0, 00:13:15.695 "data_size": 65536 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "name": "BaseBdev2", 00:13:15.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.695 "is_configured": false, 00:13:15.695 "data_offset": 0, 00:13:15.695 "data_size": 0 00:13:15.695 }, 00:13:15.695 { 00:13:15.695 "name": "BaseBdev3", 00:13:15.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.695 "is_configured": false, 00:13:15.695 "data_offset": 0, 00:13:15.695 "data_size": 0 00:13:15.695 } 00:13:15.695 ] 00:13:15.695 }' 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.695 11:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.632 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:16.632 [2024-07-25 11:24:32.432954] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.632 [2024-07-25 11:24:32.433050] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:16.632 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:16.889 [2024-07-25 11:24:32.665093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.889 [2024-07-25 11:24:32.667465] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.890 [2024-07-25 11:24:32.667532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.890 [2024-07-25 11:24:32.667565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.890 [2024-07-25 11:24:32.667578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.890 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.148 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.148 "name": "Existed_Raid", 00:13:17.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.148 "strip_size_kb": 0, 00:13:17.148 "state": "configuring", 00:13:17.148 "raid_level": "raid1", 00:13:17.148 "superblock": false, 00:13:17.148 "num_base_bdevs": 3, 00:13:17.148 "num_base_bdevs_discovered": 1, 00:13:17.148 "num_base_bdevs_operational": 3, 00:13:17.148 "base_bdevs_list": [ 00:13:17.148 { 00:13:17.148 "name": "BaseBdev1", 00:13:17.148 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:17.148 "is_configured": true, 00:13:17.148 "data_offset": 0, 00:13:17.148 "data_size": 65536 00:13:17.148 }, 00:13:17.148 { 00:13:17.148 "name": "BaseBdev2", 00:13:17.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.148 "is_configured": false, 00:13:17.148 "data_offset": 0, 00:13:17.148 "data_size": 0 00:13:17.148 }, 00:13:17.148 { 00:13:17.148 "name": "BaseBdev3", 00:13:17.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.148 "is_configured": false, 00:13:17.148 "data_offset": 0, 00:13:17.148 "data_size": 0 00:13:17.148 } 00:13:17.148 ] 00:13:17.148 }' 00:13:17.148 11:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.148 11:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.714 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:17.972 [2024-07-25 11:24:33.818583] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.972 BaseBdev2 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:17.972 
11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:17.972 11:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.230 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.488 [ 00:13:18.488 { 00:13:18.488 "name": "BaseBdev2", 00:13:18.488 "aliases": [ 00:13:18.488 "af782a00-f62a-41b3-8168-b741532b9353" 00:13:18.488 ], 00:13:18.488 "product_name": "Malloc disk", 00:13:18.488 "block_size": 512, 00:13:18.488 "num_blocks": 65536, 00:13:18.488 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:18.488 "assigned_rate_limits": { 00:13:18.488 "rw_ios_per_sec": 0, 00:13:18.488 "rw_mbytes_per_sec": 0, 00:13:18.488 "r_mbytes_per_sec": 0, 00:13:18.488 "w_mbytes_per_sec": 0 00:13:18.488 }, 00:13:18.488 "claimed": true, 00:13:18.488 "claim_type": "exclusive_write", 00:13:18.488 "zoned": false, 00:13:18.488 "supported_io_types": { 00:13:18.488 "read": true, 00:13:18.488 "write": true, 00:13:18.488 "unmap": true, 00:13:18.488 "flush": true, 00:13:18.488 "reset": true, 00:13:18.488 "nvme_admin": false, 00:13:18.488 "nvme_io": false, 00:13:18.488 "nvme_io_md": false, 00:13:18.488 "write_zeroes": true, 00:13:18.488 "zcopy": true, 00:13:18.488 "get_zone_info": false, 00:13:18.488 "zone_management": false, 00:13:18.488 "zone_append": false, 00:13:18.488 "compare": false, 00:13:18.488 "compare_and_write": false, 00:13:18.488 "abort": true, 00:13:18.488 "seek_hole": false, 00:13:18.488 "seek_data": false, 00:13:18.488 "copy": true, 00:13:18.488 "nvme_iov_md": false 00:13:18.488 }, 00:13:18.488 "memory_domains": [ 00:13:18.488 { 00:13:18.488 "dma_device_id": "system", 00:13:18.488 "dma_device_type": 1 00:13:18.488 }, 00:13:18.488 { 00:13:18.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.488 "dma_device_type": 2 00:13:18.488 } 00:13:18.488 ], 00:13:18.488 "driver_specific": {} 00:13:18.488 } 00:13:18.488 ] 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.488 11:24:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.488 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.746 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.746 "name": "Existed_Raid", 00:13:18.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.746 "strip_size_kb": 0, 00:13:18.746 "state": "configuring", 00:13:18.746 "raid_level": "raid1", 00:13:18.746 "superblock": false, 00:13:18.746 "num_base_bdevs": 3, 00:13:18.746 "num_base_bdevs_discovered": 2, 00:13:18.746 "num_base_bdevs_operational": 3, 00:13:18.746 "base_bdevs_list": [ 00:13:18.746 { 00:13:18.746 "name": "BaseBdev1", 00:13:18.746 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:18.746 "is_configured": true, 00:13:18.746 "data_offset": 0, 00:13:18.746 "data_size": 65536 00:13:18.746 }, 00:13:18.746 { 00:13:18.746 "name": "BaseBdev2", 00:13:18.746 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:18.746 "is_configured": true, 00:13:18.746 "data_offset": 0, 00:13:18.746 "data_size": 65536 00:13:18.746 }, 00:13:18.746 { 00:13:18.746 "name": "BaseBdev3", 00:13:18.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.746 "is_configured": false, 00:13:18.746 "data_offset": 0, 00:13:18.746 "data_size": 0 00:13:18.746 } 00:13:18.746 ] 00:13:18.746 }' 00:13:18.746 11:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.746 11:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.679 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:19.937 [2024-07-25 11:24:35.613189] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:19.937 [2024-07-25 11:24:35.613264] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:19.937 [2024-07-25 11:24:35.613277] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:19.937 [2024-07-25 11:24:35.613646] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:19.937 [2024-07-25 11:24:35.613898] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:19.937 [2024-07-25 11:24:35.613921] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:19.937 [2024-07-25 11:24:35.614219] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.937 BaseBdev3 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:19.937 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.195 11:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.453 [ 00:13:20.453 { 00:13:20.453 "name": "BaseBdev3", 00:13:20.453 "aliases": [ 00:13:20.453 "9878e9a5-cd75-484c-8e77-7359ce5b1375" 00:13:20.453 ], 00:13:20.453 "product_name": "Malloc disk", 00:13:20.453 "block_size": 512, 00:13:20.453 "num_blocks": 65536, 00:13:20.453 "uuid": "9878e9a5-cd75-484c-8e77-7359ce5b1375", 00:13:20.453 "assigned_rate_limits": { 00:13:20.453 "rw_ios_per_sec": 0, 00:13:20.453 "rw_mbytes_per_sec": 0, 00:13:20.453 "r_mbytes_per_sec": 0, 00:13:20.453 "w_mbytes_per_sec": 0 00:13:20.453 }, 00:13:20.453 "claimed": true, 00:13:20.453 "claim_type": "exclusive_write", 00:13:20.453 "zoned": false, 00:13:20.453 "supported_io_types": { 00:13:20.453 "read": true, 00:13:20.453 "write": true, 00:13:20.453 "unmap": true, 00:13:20.453 "flush": true, 00:13:20.453 "reset": true, 00:13:20.453 "nvme_admin": false, 00:13:20.453 "nvme_io": false, 00:13:20.453 "nvme_io_md": false, 00:13:20.453 "write_zeroes": true, 00:13:20.453 "zcopy": true, 00:13:20.453 "get_zone_info": false, 00:13:20.453 "zone_management": false, 00:13:20.453 "zone_append": false, 00:13:20.453 "compare": false, 00:13:20.453 "compare_and_write": false, 00:13:20.453 "abort": true, 00:13:20.453 "seek_hole": false, 00:13:20.453 "seek_data": false, 00:13:20.453 "copy": true, 00:13:20.453 "nvme_iov_md": false 00:13:20.453 }, 00:13:20.453 "memory_domains": [ 00:13:20.453 { 00:13:20.453 "dma_device_id": "system", 00:13:20.453 "dma_device_type": 1 00:13:20.453 }, 00:13:20.453 { 00:13:20.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.453 "dma_device_type": 2 00:13:20.453 } 00:13:20.453 ], 00:13:20.453 "driver_specific": {} 00:13:20.453 } 00:13:20.453 ] 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.453 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.711 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.711 "name": "Existed_Raid", 00:13:20.711 "uuid": "a4ce3953-a8ec-4950-985e-96467af1380b", 00:13:20.711 "strip_size_kb": 0, 00:13:20.711 "state": "online", 00:13:20.711 "raid_level": "raid1", 00:13:20.711 "superblock": false, 00:13:20.711 "num_base_bdevs": 3, 00:13:20.711 "num_base_bdevs_discovered": 3, 00:13:20.711 "num_base_bdevs_operational": 3, 00:13:20.711 "base_bdevs_list": [ 00:13:20.711 { 00:13:20.711 "name": "BaseBdev1", 00:13:20.711 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:20.711 "is_configured": true, 00:13:20.711 "data_offset": 0, 00:13:20.711 "data_size": 65536 00:13:20.711 }, 00:13:20.711 { 00:13:20.711 "name": "BaseBdev2", 00:13:20.711 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:20.711 "is_configured": true, 00:13:20.711 "data_offset": 0, 00:13:20.711 "data_size": 65536 00:13:20.711 }, 00:13:20.711 { 00:13:20.711 "name": "BaseBdev3", 00:13:20.711 "uuid": "9878e9a5-cd75-484c-8e77-7359ce5b1375", 00:13:20.711 "is_configured": true, 00:13:20.711 "data_offset": 0, 00:13:20.711 "data_size": 65536 00:13:20.711 } 00:13:20.711 ] 00:13:20.711 }' 00:13:20.711 11:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.711 11:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:21.277 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:21.535 [2024-07-25 11:24:37.302122] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.535 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:21.535 "name": "Existed_Raid", 00:13:21.535 "aliases": [ 00:13:21.535 "a4ce3953-a8ec-4950-985e-96467af1380b" 00:13:21.535 ], 00:13:21.535 "product_name": "Raid Volume", 00:13:21.535 "block_size": 512, 00:13:21.535 "num_blocks": 65536, 00:13:21.535 "uuid": "a4ce3953-a8ec-4950-985e-96467af1380b", 00:13:21.535 "assigned_rate_limits": { 00:13:21.535 "rw_ios_per_sec": 0, 00:13:21.535 "rw_mbytes_per_sec": 0, 00:13:21.535 "r_mbytes_per_sec": 0, 00:13:21.535 "w_mbytes_per_sec": 0 00:13:21.535 }, 00:13:21.535 "claimed": false, 00:13:21.535 "zoned": false, 00:13:21.535 "supported_io_types": { 00:13:21.535 "read": true, 00:13:21.535 
"write": true, 00:13:21.535 "unmap": false, 00:13:21.535 "flush": false, 00:13:21.535 "reset": true, 00:13:21.535 "nvme_admin": false, 00:13:21.535 "nvme_io": false, 00:13:21.536 "nvme_io_md": false, 00:13:21.536 "write_zeroes": true, 00:13:21.536 "zcopy": false, 00:13:21.536 "get_zone_info": false, 00:13:21.536 "zone_management": false, 00:13:21.536 "zone_append": false, 00:13:21.536 "compare": false, 00:13:21.536 "compare_and_write": false, 00:13:21.536 "abort": false, 00:13:21.536 "seek_hole": false, 00:13:21.536 "seek_data": false, 00:13:21.536 "copy": false, 00:13:21.536 "nvme_iov_md": false 00:13:21.536 }, 00:13:21.536 "memory_domains": [ 00:13:21.536 { 00:13:21.536 "dma_device_id": "system", 00:13:21.536 "dma_device_type": 1 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.536 "dma_device_type": 2 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "dma_device_id": "system", 00:13:21.536 "dma_device_type": 1 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.536 "dma_device_type": 2 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "dma_device_id": "system", 00:13:21.536 "dma_device_type": 1 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.536 "dma_device_type": 2 00:13:21.536 } 00:13:21.536 ], 00:13:21.536 "driver_specific": { 00:13:21.536 "raid": { 00:13:21.536 "uuid": "a4ce3953-a8ec-4950-985e-96467af1380b", 00:13:21.536 "strip_size_kb": 0, 00:13:21.536 "state": "online", 00:13:21.536 "raid_level": "raid1", 00:13:21.536 "superblock": false, 00:13:21.536 "num_base_bdevs": 3, 00:13:21.536 "num_base_bdevs_discovered": 3, 00:13:21.536 "num_base_bdevs_operational": 3, 00:13:21.536 "base_bdevs_list": [ 00:13:21.536 { 00:13:21.536 "name": "BaseBdev1", 00:13:21.536 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:21.536 "is_configured": true, 00:13:21.536 "data_offset": 0, 00:13:21.536 "data_size": 65536 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "name": "BaseBdev2", 00:13:21.536 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:21.536 "is_configured": true, 00:13:21.536 "data_offset": 0, 00:13:21.536 "data_size": 65536 00:13:21.536 }, 00:13:21.536 { 00:13:21.536 "name": "BaseBdev3", 00:13:21.536 "uuid": "9878e9a5-cd75-484c-8e77-7359ce5b1375", 00:13:21.536 "is_configured": true, 00:13:21.536 "data_offset": 0, 00:13:21.536 "data_size": 65536 00:13:21.536 } 00:13:21.536 ] 00:13:21.536 } 00:13:21.536 } 00:13:21.536 }' 00:13:21.536 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.536 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:21.536 BaseBdev2 00:13:21.536 BaseBdev3' 00:13:21.536 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.536 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:21.536 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.794 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.794 "name": "BaseBdev1", 00:13:21.794 "aliases": [ 00:13:21.794 "2b3f31e8-941c-47c4-abb5-ddcd3454c545" 00:13:21.794 ], 00:13:21.794 "product_name": "Malloc disk", 00:13:21.794 "block_size": 512, 00:13:21.794 
"num_blocks": 65536, 00:13:21.794 "uuid": "2b3f31e8-941c-47c4-abb5-ddcd3454c545", 00:13:21.794 "assigned_rate_limits": { 00:13:21.794 "rw_ios_per_sec": 0, 00:13:21.794 "rw_mbytes_per_sec": 0, 00:13:21.794 "r_mbytes_per_sec": 0, 00:13:21.794 "w_mbytes_per_sec": 0 00:13:21.794 }, 00:13:21.794 "claimed": true, 00:13:21.794 "claim_type": "exclusive_write", 00:13:21.794 "zoned": false, 00:13:21.794 "supported_io_types": { 00:13:21.794 "read": true, 00:13:21.794 "write": true, 00:13:21.794 "unmap": true, 00:13:21.794 "flush": true, 00:13:21.794 "reset": true, 00:13:21.794 "nvme_admin": false, 00:13:21.794 "nvme_io": false, 00:13:21.794 "nvme_io_md": false, 00:13:21.794 "write_zeroes": true, 00:13:21.794 "zcopy": true, 00:13:21.794 "get_zone_info": false, 00:13:21.794 "zone_management": false, 00:13:21.794 "zone_append": false, 00:13:21.794 "compare": false, 00:13:21.794 "compare_and_write": false, 00:13:21.794 "abort": true, 00:13:21.794 "seek_hole": false, 00:13:21.794 "seek_data": false, 00:13:21.794 "copy": true, 00:13:21.794 "nvme_iov_md": false 00:13:21.794 }, 00:13:21.794 "memory_domains": [ 00:13:21.794 { 00:13:21.794 "dma_device_id": "system", 00:13:21.794 "dma_device_type": 1 00:13:21.794 }, 00:13:21.794 { 00:13:21.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.794 "dma_device_type": 2 00:13:21.794 } 00:13:21.794 ], 00:13:21.794 "driver_specific": {} 00:13:21.794 }' 00:13:21.794 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.794 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.052 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.310 11:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.310 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.310 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.310 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:22.310 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.569 "name": "BaseBdev2", 00:13:22.569 "aliases": [ 00:13:22.569 "af782a00-f62a-41b3-8168-b741532b9353" 00:13:22.569 ], 00:13:22.569 "product_name": "Malloc disk", 00:13:22.569 "block_size": 512, 00:13:22.569 "num_blocks": 65536, 00:13:22.569 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:22.569 "assigned_rate_limits": { 00:13:22.569 "rw_ios_per_sec": 0, 00:13:22.569 "rw_mbytes_per_sec": 0, 
00:13:22.569 "r_mbytes_per_sec": 0, 00:13:22.569 "w_mbytes_per_sec": 0 00:13:22.569 }, 00:13:22.569 "claimed": true, 00:13:22.569 "claim_type": "exclusive_write", 00:13:22.569 "zoned": false, 00:13:22.569 "supported_io_types": { 00:13:22.569 "read": true, 00:13:22.569 "write": true, 00:13:22.569 "unmap": true, 00:13:22.569 "flush": true, 00:13:22.569 "reset": true, 00:13:22.569 "nvme_admin": false, 00:13:22.569 "nvme_io": false, 00:13:22.569 "nvme_io_md": false, 00:13:22.569 "write_zeroes": true, 00:13:22.569 "zcopy": true, 00:13:22.569 "get_zone_info": false, 00:13:22.569 "zone_management": false, 00:13:22.569 "zone_append": false, 00:13:22.569 "compare": false, 00:13:22.569 "compare_and_write": false, 00:13:22.569 "abort": true, 00:13:22.569 "seek_hole": false, 00:13:22.569 "seek_data": false, 00:13:22.569 "copy": true, 00:13:22.569 "nvme_iov_md": false 00:13:22.569 }, 00:13:22.569 "memory_domains": [ 00:13:22.569 { 00:13:22.569 "dma_device_id": "system", 00:13:22.569 "dma_device_type": 1 00:13:22.569 }, 00:13:22.569 { 00:13:22.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.569 "dma_device_type": 2 00:13:22.569 } 00:13:22.569 ], 00:13:22.569 "driver_specific": {} 00:13:22.569 }' 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.569 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.827 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.827 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.827 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.827 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.828 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.828 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:22.828 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:22.828 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:23.086 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:23.086 "name": "BaseBdev3", 00:13:23.086 "aliases": [ 00:13:23.086 "9878e9a5-cd75-484c-8e77-7359ce5b1375" 00:13:23.086 ], 00:13:23.086 "product_name": "Malloc disk", 00:13:23.086 "block_size": 512, 00:13:23.086 "num_blocks": 65536, 00:13:23.086 "uuid": "9878e9a5-cd75-484c-8e77-7359ce5b1375", 00:13:23.086 "assigned_rate_limits": { 00:13:23.086 "rw_ios_per_sec": 0, 00:13:23.086 "rw_mbytes_per_sec": 0, 00:13:23.086 "r_mbytes_per_sec": 0, 00:13:23.086 "w_mbytes_per_sec": 0 00:13:23.086 }, 00:13:23.086 "claimed": true, 00:13:23.086 "claim_type": "exclusive_write", 00:13:23.086 "zoned": false, 
00:13:23.086 "supported_io_types": { 00:13:23.086 "read": true, 00:13:23.086 "write": true, 00:13:23.086 "unmap": true, 00:13:23.086 "flush": true, 00:13:23.086 "reset": true, 00:13:23.086 "nvme_admin": false, 00:13:23.086 "nvme_io": false, 00:13:23.086 "nvme_io_md": false, 00:13:23.086 "write_zeroes": true, 00:13:23.086 "zcopy": true, 00:13:23.086 "get_zone_info": false, 00:13:23.086 "zone_management": false, 00:13:23.086 "zone_append": false, 00:13:23.086 "compare": false, 00:13:23.086 "compare_and_write": false, 00:13:23.086 "abort": true, 00:13:23.086 "seek_hole": false, 00:13:23.086 "seek_data": false, 00:13:23.086 "copy": true, 00:13:23.086 "nvme_iov_md": false 00:13:23.086 }, 00:13:23.086 "memory_domains": [ 00:13:23.086 { 00:13:23.086 "dma_device_id": "system", 00:13:23.086 "dma_device_type": 1 00:13:23.086 }, 00:13:23.086 { 00:13:23.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.086 "dma_device_type": 2 00:13:23.086 } 00:13:23.086 ], 00:13:23.086 "driver_specific": {} 00:13:23.086 }' 00:13:23.086 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.086 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.086 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:23.086 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.343 11:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.343 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.601 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:23.601 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:23.601 [2024-07-25 11:24:39.462290] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.859 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.117 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.117 "name": "Existed_Raid", 00:13:24.117 "uuid": "a4ce3953-a8ec-4950-985e-96467af1380b", 00:13:24.117 "strip_size_kb": 0, 00:13:24.117 "state": "online", 00:13:24.117 "raid_level": "raid1", 00:13:24.117 "superblock": false, 00:13:24.117 "num_base_bdevs": 3, 00:13:24.117 "num_base_bdevs_discovered": 2, 00:13:24.117 "num_base_bdevs_operational": 2, 00:13:24.117 "base_bdevs_list": [ 00:13:24.117 { 00:13:24.117 "name": null, 00:13:24.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.117 "is_configured": false, 00:13:24.117 "data_offset": 0, 00:13:24.117 "data_size": 65536 00:13:24.117 }, 00:13:24.117 { 00:13:24.117 "name": "BaseBdev2", 00:13:24.117 "uuid": "af782a00-f62a-41b3-8168-b741532b9353", 00:13:24.117 "is_configured": true, 00:13:24.117 "data_offset": 0, 00:13:24.117 "data_size": 65536 00:13:24.117 }, 00:13:24.117 { 00:13:24.117 "name": "BaseBdev3", 00:13:24.117 "uuid": "9878e9a5-cd75-484c-8e77-7359ce5b1375", 00:13:24.117 "is_configured": true, 00:13:24.117 "data_offset": 0, 00:13:24.117 "data_size": 65536 00:13:24.117 } 00:13:24.117 ] 00:13:24.117 }' 00:13:24.117 11:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.117 11:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.683 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:24.683 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:24.683 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.683 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:24.957 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:24.957 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.958 11:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:25.226 [2024-07-25 11:24:40.960090] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.226 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:25.226 11:24:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:25.226 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.226 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:25.484 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:25.484 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.484 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:25.742 [2024-07-25 11:24:41.549602] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.742 [2024-07-25 11:24:41.549753] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:26.000 [2024-07-25 11:24:41.634383] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:26.000 [2024-07-25 11:24:41.634458] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:26.000 [2024-07-25 11:24:41.634473] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:26.000 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:26.000 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.000 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.000 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:26.257 11:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:26.516 BaseBdev2 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:26.516 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:13:26.773 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.031 [ 00:13:27.031 { 00:13:27.031 "name": "BaseBdev2", 00:13:27.031 "aliases": [ 00:13:27.031 "dd641e47-50b8-48ff-89bc-c3aba8c8eb79" 00:13:27.031 ], 00:13:27.031 "product_name": "Malloc disk", 00:13:27.031 "block_size": 512, 00:13:27.031 "num_blocks": 65536, 00:13:27.031 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:27.031 "assigned_rate_limits": { 00:13:27.031 "rw_ios_per_sec": 0, 00:13:27.031 "rw_mbytes_per_sec": 0, 00:13:27.031 "r_mbytes_per_sec": 0, 00:13:27.031 "w_mbytes_per_sec": 0 00:13:27.031 }, 00:13:27.031 "claimed": false, 00:13:27.031 "zoned": false, 00:13:27.031 "supported_io_types": { 00:13:27.031 "read": true, 00:13:27.031 "write": true, 00:13:27.031 "unmap": true, 00:13:27.031 "flush": true, 00:13:27.031 "reset": true, 00:13:27.031 "nvme_admin": false, 00:13:27.031 "nvme_io": false, 00:13:27.031 "nvme_io_md": false, 00:13:27.031 "write_zeroes": true, 00:13:27.031 "zcopy": true, 00:13:27.031 "get_zone_info": false, 00:13:27.031 "zone_management": false, 00:13:27.031 "zone_append": false, 00:13:27.031 "compare": false, 00:13:27.031 "compare_and_write": false, 00:13:27.031 "abort": true, 00:13:27.031 "seek_hole": false, 00:13:27.031 "seek_data": false, 00:13:27.031 "copy": true, 00:13:27.031 "nvme_iov_md": false 00:13:27.031 }, 00:13:27.031 "memory_domains": [ 00:13:27.031 { 00:13:27.031 "dma_device_id": "system", 00:13:27.031 "dma_device_type": 1 00:13:27.031 }, 00:13:27.031 { 00:13:27.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.031 "dma_device_type": 2 00:13:27.031 } 00:13:27.031 ], 00:13:27.031 "driver_specific": {} 00:13:27.031 } 00:13:27.031 ] 00:13:27.031 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.031 11:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.031 11:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.031 11:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.289 BaseBdev3 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:27.289 11:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.550 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.811 [ 00:13:27.811 { 00:13:27.811 "name": "BaseBdev3", 00:13:27.811 "aliases": [ 00:13:27.811 
"bc318407-81f6-4299-8ae6-74b86c4ea5a5" 00:13:27.811 ], 00:13:27.811 "product_name": "Malloc disk", 00:13:27.811 "block_size": 512, 00:13:27.811 "num_blocks": 65536, 00:13:27.811 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:27.811 "assigned_rate_limits": { 00:13:27.811 "rw_ios_per_sec": 0, 00:13:27.811 "rw_mbytes_per_sec": 0, 00:13:27.811 "r_mbytes_per_sec": 0, 00:13:27.811 "w_mbytes_per_sec": 0 00:13:27.811 }, 00:13:27.811 "claimed": false, 00:13:27.811 "zoned": false, 00:13:27.811 "supported_io_types": { 00:13:27.811 "read": true, 00:13:27.811 "write": true, 00:13:27.811 "unmap": true, 00:13:27.811 "flush": true, 00:13:27.811 "reset": true, 00:13:27.811 "nvme_admin": false, 00:13:27.811 "nvme_io": false, 00:13:27.811 "nvme_io_md": false, 00:13:27.811 "write_zeroes": true, 00:13:27.811 "zcopy": true, 00:13:27.811 "get_zone_info": false, 00:13:27.811 "zone_management": false, 00:13:27.811 "zone_append": false, 00:13:27.811 "compare": false, 00:13:27.811 "compare_and_write": false, 00:13:27.811 "abort": true, 00:13:27.811 "seek_hole": false, 00:13:27.811 "seek_data": false, 00:13:27.811 "copy": true, 00:13:27.811 "nvme_iov_md": false 00:13:27.811 }, 00:13:27.811 "memory_domains": [ 00:13:27.811 { 00:13:27.811 "dma_device_id": "system", 00:13:27.811 "dma_device_type": 1 00:13:27.811 }, 00:13:27.811 { 00:13:27.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.811 "dma_device_type": 2 00:13:27.811 } 00:13:27.811 ], 00:13:27.811 "driver_specific": {} 00:13:27.811 } 00:13:27.811 ] 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:27.812 [2024-07-25 11:24:43.657898] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.812 [2024-07-25 11:24:43.657974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.812 [2024-07-25 11:24:43.658022] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.812 [2024-07-25 11:24:43.660390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.812 11:24:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.812 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.070 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.070 "name": "Existed_Raid", 00:13:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.070 "strip_size_kb": 0, 00:13:28.070 "state": "configuring", 00:13:28.070 "raid_level": "raid1", 00:13:28.070 "superblock": false, 00:13:28.070 "num_base_bdevs": 3, 00:13:28.070 "num_base_bdevs_discovered": 2, 00:13:28.070 "num_base_bdevs_operational": 3, 00:13:28.070 "base_bdevs_list": [ 00:13:28.070 { 00:13:28.070 "name": "BaseBdev1", 00:13:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.070 "is_configured": false, 00:13:28.070 "data_offset": 0, 00:13:28.070 "data_size": 0 00:13:28.070 }, 00:13:28.070 { 00:13:28.070 "name": "BaseBdev2", 00:13:28.070 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:28.070 "is_configured": true, 00:13:28.070 "data_offset": 0, 00:13:28.070 "data_size": 65536 00:13:28.070 }, 00:13:28.070 { 00:13:28.070 "name": "BaseBdev3", 00:13:28.070 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:28.070 "is_configured": true, 00:13:28.070 "data_offset": 0, 00:13:28.070 "data_size": 65536 00:13:28.070 } 00:13:28.070 ] 00:13:28.070 }' 00:13:28.070 11:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.070 11:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:29.003 [2024-07-25 11:24:44.746213] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:29.003 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.261 11:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.261 "name": "Existed_Raid", 00:13:29.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.261 "strip_size_kb": 0, 00:13:29.261 "state": "configuring", 00:13:29.261 "raid_level": "raid1", 00:13:29.261 "superblock": false, 00:13:29.261 "num_base_bdevs": 3, 00:13:29.261 "num_base_bdevs_discovered": 1, 00:13:29.261 "num_base_bdevs_operational": 3, 00:13:29.261 "base_bdevs_list": [ 00:13:29.261 { 00:13:29.261 "name": "BaseBdev1", 00:13:29.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.261 "is_configured": false, 00:13:29.261 "data_offset": 0, 00:13:29.261 "data_size": 0 00:13:29.261 }, 00:13:29.261 { 00:13:29.261 "name": null, 00:13:29.261 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:29.261 "is_configured": false, 00:13:29.261 "data_offset": 0, 00:13:29.261 "data_size": 65536 00:13:29.261 }, 00:13:29.261 { 00:13:29.262 "name": "BaseBdev3", 00:13:29.262 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:29.262 "is_configured": true, 00:13:29.262 "data_offset": 0, 00:13:29.262 "data_size": 65536 00:13:29.262 } 00:13:29.262 ] 00:13:29.262 }' 00:13:29.262 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.262 11:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.829 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.829 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.087 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:30.087 11:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.345 [2024-07-25 11:24:46.221959] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.345 BaseBdev1 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.603 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.861 [ 00:13:30.861 { 00:13:30.861 "name": "BaseBdev1", 00:13:30.861 "aliases": [ 00:13:30.861 "e5d5b5b1-0332-455e-9b3e-c49f6379f06d" 00:13:30.861 ], 00:13:30.861 "product_name": 
"Malloc disk", 00:13:30.861 "block_size": 512, 00:13:30.861 "num_blocks": 65536, 00:13:30.861 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:30.861 "assigned_rate_limits": { 00:13:30.861 "rw_ios_per_sec": 0, 00:13:30.861 "rw_mbytes_per_sec": 0, 00:13:30.861 "r_mbytes_per_sec": 0, 00:13:30.861 "w_mbytes_per_sec": 0 00:13:30.861 }, 00:13:30.861 "claimed": true, 00:13:30.861 "claim_type": "exclusive_write", 00:13:30.861 "zoned": false, 00:13:30.861 "supported_io_types": { 00:13:30.861 "read": true, 00:13:30.861 "write": true, 00:13:30.861 "unmap": true, 00:13:30.861 "flush": true, 00:13:30.861 "reset": true, 00:13:30.861 "nvme_admin": false, 00:13:30.861 "nvme_io": false, 00:13:30.861 "nvme_io_md": false, 00:13:30.861 "write_zeroes": true, 00:13:30.861 "zcopy": true, 00:13:30.861 "get_zone_info": false, 00:13:30.861 "zone_management": false, 00:13:30.861 "zone_append": false, 00:13:30.861 "compare": false, 00:13:30.861 "compare_and_write": false, 00:13:30.861 "abort": true, 00:13:30.861 "seek_hole": false, 00:13:30.861 "seek_data": false, 00:13:30.861 "copy": true, 00:13:30.861 "nvme_iov_md": false 00:13:30.861 }, 00:13:30.861 "memory_domains": [ 00:13:30.861 { 00:13:30.861 "dma_device_id": "system", 00:13:30.861 "dma_device_type": 1 00:13:30.861 }, 00:13:30.861 { 00:13:30.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.861 "dma_device_type": 2 00:13:30.861 } 00:13:30.861 ], 00:13:30.861 "driver_specific": {} 00:13:30.861 } 00:13:30.861 ] 00:13:31.119 11:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:31.119 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:31.119 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.119 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.119 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.120 11:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.378 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.378 "name": "Existed_Raid", 00:13:31.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.378 "strip_size_kb": 0, 00:13:31.378 "state": "configuring", 00:13:31.378 "raid_level": "raid1", 00:13:31.378 "superblock": false, 00:13:31.378 "num_base_bdevs": 3, 00:13:31.378 "num_base_bdevs_discovered": 2, 00:13:31.378 "num_base_bdevs_operational": 3, 00:13:31.378 "base_bdevs_list": [ 
00:13:31.378 { 00:13:31.378 "name": "BaseBdev1", 00:13:31.378 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:31.378 "is_configured": true, 00:13:31.378 "data_offset": 0, 00:13:31.378 "data_size": 65536 00:13:31.378 }, 00:13:31.378 { 00:13:31.378 "name": null, 00:13:31.378 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:31.378 "is_configured": false, 00:13:31.378 "data_offset": 0, 00:13:31.378 "data_size": 65536 00:13:31.378 }, 00:13:31.378 { 00:13:31.378 "name": "BaseBdev3", 00:13:31.378 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:31.378 "is_configured": true, 00:13:31.378 "data_offset": 0, 00:13:31.378 "data_size": 65536 00:13:31.378 } 00:13:31.378 ] 00:13:31.378 }' 00:13:31.378 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.378 11:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.943 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:31.943 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:32.201 11:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:32.459 [2024-07-25 11:24:48.154684] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.459 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.717 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.717 "name": "Existed_Raid", 00:13:32.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.717 "strip_size_kb": 0, 00:13:32.717 "state": "configuring", 00:13:32.717 "raid_level": "raid1", 00:13:32.717 "superblock": false, 00:13:32.717 "num_base_bdevs": 3, 00:13:32.717 "num_base_bdevs_discovered": 1, 00:13:32.717 
"num_base_bdevs_operational": 3, 00:13:32.717 "base_bdevs_list": [ 00:13:32.717 { 00:13:32.717 "name": "BaseBdev1", 00:13:32.717 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:32.717 "is_configured": true, 00:13:32.717 "data_offset": 0, 00:13:32.717 "data_size": 65536 00:13:32.717 }, 00:13:32.717 { 00:13:32.717 "name": null, 00:13:32.717 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:32.717 "is_configured": false, 00:13:32.717 "data_offset": 0, 00:13:32.717 "data_size": 65536 00:13:32.717 }, 00:13:32.717 { 00:13:32.717 "name": null, 00:13:32.717 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:32.717 "is_configured": false, 00:13:32.717 "data_offset": 0, 00:13:32.717 "data_size": 65536 00:13:32.717 } 00:13:32.717 ] 00:13:32.717 }' 00:13:32.718 11:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.718 11:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.283 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.283 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.541 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:33.541 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.799 [2024-07-25 11:24:49.538973] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:33.799 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.800 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.057 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.057 "name": "Existed_Raid", 00:13:34.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.057 "strip_size_kb": 0, 00:13:34.057 "state": "configuring", 00:13:34.057 "raid_level": "raid1", 00:13:34.057 "superblock": false, 00:13:34.057 
"num_base_bdevs": 3, 00:13:34.057 "num_base_bdevs_discovered": 2, 00:13:34.057 "num_base_bdevs_operational": 3, 00:13:34.057 "base_bdevs_list": [ 00:13:34.057 { 00:13:34.057 "name": "BaseBdev1", 00:13:34.057 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:34.057 "is_configured": true, 00:13:34.057 "data_offset": 0, 00:13:34.057 "data_size": 65536 00:13:34.057 }, 00:13:34.057 { 00:13:34.057 "name": null, 00:13:34.057 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:34.057 "is_configured": false, 00:13:34.057 "data_offset": 0, 00:13:34.057 "data_size": 65536 00:13:34.057 }, 00:13:34.057 { 00:13:34.057 "name": "BaseBdev3", 00:13:34.057 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:34.057 "is_configured": true, 00:13:34.057 "data_offset": 0, 00:13:34.057 "data_size": 65536 00:13:34.057 } 00:13:34.057 ] 00:13:34.057 }' 00:13:34.057 11:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.057 11:24:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.623 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.623 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.881 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:34.881 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:35.138 [2024-07-25 11:24:50.891412] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.138 11:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.396 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.396 "name": "Existed_Raid", 00:13:35.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.396 "strip_size_kb": 0, 00:13:35.396 "state": "configuring", 00:13:35.396 "raid_level": "raid1", 
00:13:35.396 "superblock": false, 00:13:35.396 "num_base_bdevs": 3, 00:13:35.396 "num_base_bdevs_discovered": 1, 00:13:35.397 "num_base_bdevs_operational": 3, 00:13:35.397 "base_bdevs_list": [ 00:13:35.397 { 00:13:35.397 "name": null, 00:13:35.397 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:35.397 "is_configured": false, 00:13:35.397 "data_offset": 0, 00:13:35.397 "data_size": 65536 00:13:35.397 }, 00:13:35.397 { 00:13:35.397 "name": null, 00:13:35.397 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:35.397 "is_configured": false, 00:13:35.397 "data_offset": 0, 00:13:35.397 "data_size": 65536 00:13:35.397 }, 00:13:35.397 { 00:13:35.397 "name": "BaseBdev3", 00:13:35.397 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:35.397 "is_configured": true, 00:13:35.397 "data_offset": 0, 00:13:35.397 "data_size": 65536 00:13:35.397 } 00:13:35.397 ] 00:13:35.397 }' 00:13:35.397 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.397 11:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.374 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:36.374 11:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.374 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:36.374 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:36.632 [2024-07-25 11:24:52.401495] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.632 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.891 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.891 "name": "Existed_Raid", 00:13:36.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.891 "strip_size_kb": 0, 
00:13:36.891 "state": "configuring", 00:13:36.891 "raid_level": "raid1", 00:13:36.891 "superblock": false, 00:13:36.891 "num_base_bdevs": 3, 00:13:36.891 "num_base_bdevs_discovered": 2, 00:13:36.891 "num_base_bdevs_operational": 3, 00:13:36.891 "base_bdevs_list": [ 00:13:36.891 { 00:13:36.891 "name": null, 00:13:36.891 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:36.891 "is_configured": false, 00:13:36.891 "data_offset": 0, 00:13:36.891 "data_size": 65536 00:13:36.891 }, 00:13:36.891 { 00:13:36.891 "name": "BaseBdev2", 00:13:36.891 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:36.891 "is_configured": true, 00:13:36.891 "data_offset": 0, 00:13:36.891 "data_size": 65536 00:13:36.891 }, 00:13:36.891 { 00:13:36.891 "name": "BaseBdev3", 00:13:36.891 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:36.891 "is_configured": true, 00:13:36.891 "data_offset": 0, 00:13:36.891 "data_size": 65536 00:13:36.891 } 00:13:36.891 ] 00:13:36.891 }' 00:13:36.891 11:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.891 11:24:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.458 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.458 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:37.715 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:37.715 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:37.715 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.041 11:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e5d5b5b1-0332-455e-9b3e-c49f6379f06d 00:13:38.323 [2024-07-25 11:24:54.042117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:38.323 [2024-07-25 11:24:54.042171] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:38.323 [2024-07-25 11:24:54.042186] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:38.323 [2024-07-25 11:24:54.042528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:38.323 [2024-07-25 11:24:54.042809] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:38.323 [2024-07-25 11:24:54.042826] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:38.323 [2024-07-25 11:24:54.043122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.323 NewBaseBdev 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:38.323 11:24:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.323 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:38.581 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:38.839 [ 00:13:38.839 { 00:13:38.839 "name": "NewBaseBdev", 00:13:38.839 "aliases": [ 00:13:38.839 "e5d5b5b1-0332-455e-9b3e-c49f6379f06d" 00:13:38.839 ], 00:13:38.839 "product_name": "Malloc disk", 00:13:38.839 "block_size": 512, 00:13:38.839 "num_blocks": 65536, 00:13:38.839 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:38.839 "assigned_rate_limits": { 00:13:38.839 "rw_ios_per_sec": 0, 00:13:38.839 "rw_mbytes_per_sec": 0, 00:13:38.839 "r_mbytes_per_sec": 0, 00:13:38.839 "w_mbytes_per_sec": 0 00:13:38.839 }, 00:13:38.839 "claimed": true, 00:13:38.839 "claim_type": "exclusive_write", 00:13:38.839 "zoned": false, 00:13:38.839 "supported_io_types": { 00:13:38.839 "read": true, 00:13:38.839 "write": true, 00:13:38.839 "unmap": true, 00:13:38.839 "flush": true, 00:13:38.839 "reset": true, 00:13:38.839 "nvme_admin": false, 00:13:38.839 "nvme_io": false, 00:13:38.839 "nvme_io_md": false, 00:13:38.839 "write_zeroes": true, 00:13:38.839 "zcopy": true, 00:13:38.839 "get_zone_info": false, 00:13:38.839 "zone_management": false, 00:13:38.839 "zone_append": false, 00:13:38.839 "compare": false, 00:13:38.839 "compare_and_write": false, 00:13:38.839 "abort": true, 00:13:38.840 "seek_hole": false, 00:13:38.840 "seek_data": false, 00:13:38.840 "copy": true, 00:13:38.840 "nvme_iov_md": false 00:13:38.840 }, 00:13:38.840 "memory_domains": [ 00:13:38.840 { 00:13:38.840 "dma_device_id": "system", 00:13:38.840 "dma_device_type": 1 00:13:38.840 }, 00:13:38.840 { 00:13:38.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.840 "dma_device_type": 2 00:13:38.840 } 00:13:38.840 ], 00:13:38.840 "driver_specific": {} 00:13:38.840 } 00:13:38.840 ] 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.840 11:24:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.840 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.098 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.098 "name": "Existed_Raid", 00:13:39.098 "uuid": "e5bd7e0d-9b08-4d20-a281-4256f5d99865", 00:13:39.098 "strip_size_kb": 0, 00:13:39.098 "state": "online", 00:13:39.098 "raid_level": "raid1", 00:13:39.098 "superblock": false, 00:13:39.098 "num_base_bdevs": 3, 00:13:39.098 "num_base_bdevs_discovered": 3, 00:13:39.098 "num_base_bdevs_operational": 3, 00:13:39.098 "base_bdevs_list": [ 00:13:39.098 { 00:13:39.098 "name": "NewBaseBdev", 00:13:39.098 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 0, 00:13:39.098 "data_size": 65536 00:13:39.098 }, 00:13:39.098 { 00:13:39.098 "name": "BaseBdev2", 00:13:39.098 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 0, 00:13:39.098 "data_size": 65536 00:13:39.098 }, 00:13:39.098 { 00:13:39.098 "name": "BaseBdev3", 00:13:39.098 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 0, 00:13:39.098 "data_size": 65536 00:13:39.098 } 00:13:39.098 ] 00:13:39.098 }' 00:13:39.098 11:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.098 11:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:39.664 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:39.921 [2024-07-25 11:24:55.771103] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.921 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:39.921 "name": "Existed_Raid", 00:13:39.921 "aliases": [ 00:13:39.921 "e5bd7e0d-9b08-4d20-a281-4256f5d99865" 00:13:39.921 ], 00:13:39.921 "product_name": "Raid Volume", 00:13:39.921 "block_size": 512, 00:13:39.921 "num_blocks": 65536, 00:13:39.921 "uuid": "e5bd7e0d-9b08-4d20-a281-4256f5d99865", 00:13:39.921 "assigned_rate_limits": { 00:13:39.921 "rw_ios_per_sec": 0, 00:13:39.921 "rw_mbytes_per_sec": 0, 00:13:39.921 "r_mbytes_per_sec": 0, 00:13:39.921 "w_mbytes_per_sec": 0 00:13:39.921 }, 00:13:39.921 "claimed": false, 00:13:39.921 "zoned": false, 00:13:39.921 "supported_io_types": { 00:13:39.921 "read": true, 00:13:39.921 "write": true, 00:13:39.921 
"unmap": false, 00:13:39.921 "flush": false, 00:13:39.921 "reset": true, 00:13:39.921 "nvme_admin": false, 00:13:39.921 "nvme_io": false, 00:13:39.921 "nvme_io_md": false, 00:13:39.921 "write_zeroes": true, 00:13:39.921 "zcopy": false, 00:13:39.921 "get_zone_info": false, 00:13:39.921 "zone_management": false, 00:13:39.921 "zone_append": false, 00:13:39.921 "compare": false, 00:13:39.921 "compare_and_write": false, 00:13:39.921 "abort": false, 00:13:39.921 "seek_hole": false, 00:13:39.921 "seek_data": false, 00:13:39.921 "copy": false, 00:13:39.921 "nvme_iov_md": false 00:13:39.921 }, 00:13:39.921 "memory_domains": [ 00:13:39.921 { 00:13:39.921 "dma_device_id": "system", 00:13:39.921 "dma_device_type": 1 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.921 "dma_device_type": 2 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "dma_device_id": "system", 00:13:39.921 "dma_device_type": 1 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.921 "dma_device_type": 2 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "dma_device_id": "system", 00:13:39.921 "dma_device_type": 1 00:13:39.921 }, 00:13:39.921 { 00:13:39.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.921 "dma_device_type": 2 00:13:39.921 } 00:13:39.921 ], 00:13:39.921 "driver_specific": { 00:13:39.921 "raid": { 00:13:39.921 "uuid": "e5bd7e0d-9b08-4d20-a281-4256f5d99865", 00:13:39.921 "strip_size_kb": 0, 00:13:39.921 "state": "online", 00:13:39.921 "raid_level": "raid1", 00:13:39.921 "superblock": false, 00:13:39.921 "num_base_bdevs": 3, 00:13:39.921 "num_base_bdevs_discovered": 3, 00:13:39.921 "num_base_bdevs_operational": 3, 00:13:39.921 "base_bdevs_list": [ 00:13:39.921 { 00:13:39.921 "name": "NewBaseBdev", 00:13:39.921 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:39.922 "is_configured": true, 00:13:39.922 "data_offset": 0, 00:13:39.922 "data_size": 65536 00:13:39.922 }, 00:13:39.922 { 00:13:39.922 "name": "BaseBdev2", 00:13:39.922 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:39.922 "is_configured": true, 00:13:39.922 "data_offset": 0, 00:13:39.922 "data_size": 65536 00:13:39.922 }, 00:13:39.922 { 00:13:39.922 "name": "BaseBdev3", 00:13:39.922 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:39.922 "is_configured": true, 00:13:39.922 "data_offset": 0, 00:13:39.922 "data_size": 65536 00:13:39.922 } 00:13:39.922 ] 00:13:39.922 } 00:13:39.922 } 00:13:39.922 }' 00:13:39.922 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.180 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:40.180 BaseBdev2 00:13:40.180 BaseBdev3' 00:13:40.180 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.180 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:40.180 11:24:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.438 "name": "NewBaseBdev", 00:13:40.438 "aliases": [ 00:13:40.438 "e5d5b5b1-0332-455e-9b3e-c49f6379f06d" 00:13:40.438 ], 00:13:40.438 "product_name": "Malloc disk", 00:13:40.438 "block_size": 512, 00:13:40.438 "num_blocks": 65536, 
00:13:40.438 "uuid": "e5d5b5b1-0332-455e-9b3e-c49f6379f06d", 00:13:40.438 "assigned_rate_limits": { 00:13:40.438 "rw_ios_per_sec": 0, 00:13:40.438 "rw_mbytes_per_sec": 0, 00:13:40.438 "r_mbytes_per_sec": 0, 00:13:40.438 "w_mbytes_per_sec": 0 00:13:40.438 }, 00:13:40.438 "claimed": true, 00:13:40.438 "claim_type": "exclusive_write", 00:13:40.438 "zoned": false, 00:13:40.438 "supported_io_types": { 00:13:40.438 "read": true, 00:13:40.438 "write": true, 00:13:40.438 "unmap": true, 00:13:40.438 "flush": true, 00:13:40.438 "reset": true, 00:13:40.438 "nvme_admin": false, 00:13:40.438 "nvme_io": false, 00:13:40.438 "nvme_io_md": false, 00:13:40.438 "write_zeroes": true, 00:13:40.438 "zcopy": true, 00:13:40.438 "get_zone_info": false, 00:13:40.438 "zone_management": false, 00:13:40.438 "zone_append": false, 00:13:40.438 "compare": false, 00:13:40.438 "compare_and_write": false, 00:13:40.438 "abort": true, 00:13:40.438 "seek_hole": false, 00:13:40.438 "seek_data": false, 00:13:40.438 "copy": true, 00:13:40.438 "nvme_iov_md": false 00:13:40.438 }, 00:13:40.438 "memory_domains": [ 00:13:40.438 { 00:13:40.438 "dma_device_id": "system", 00:13:40.438 "dma_device_type": 1 00:13:40.438 }, 00:13:40.438 { 00:13:40.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.438 "dma_device_type": 2 00:13:40.438 } 00:13:40.438 ], 00:13:40.438 "driver_specific": {} 00:13:40.438 }' 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.438 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:40.696 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.953 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.953 "name": "BaseBdev2", 00:13:40.953 "aliases": [ 00:13:40.953 "dd641e47-50b8-48ff-89bc-c3aba8c8eb79" 00:13:40.953 ], 00:13:40.953 "product_name": "Malloc disk", 00:13:40.953 "block_size": 512, 00:13:40.953 "num_blocks": 65536, 00:13:40.953 "uuid": "dd641e47-50b8-48ff-89bc-c3aba8c8eb79", 00:13:40.953 "assigned_rate_limits": { 00:13:40.953 "rw_ios_per_sec": 0, 00:13:40.953 "rw_mbytes_per_sec": 0, 00:13:40.953 
"r_mbytes_per_sec": 0, 00:13:40.953 "w_mbytes_per_sec": 0 00:13:40.953 }, 00:13:40.953 "claimed": true, 00:13:40.953 "claim_type": "exclusive_write", 00:13:40.953 "zoned": false, 00:13:40.953 "supported_io_types": { 00:13:40.953 "read": true, 00:13:40.953 "write": true, 00:13:40.953 "unmap": true, 00:13:40.953 "flush": true, 00:13:40.953 "reset": true, 00:13:40.953 "nvme_admin": false, 00:13:40.953 "nvme_io": false, 00:13:40.953 "nvme_io_md": false, 00:13:40.953 "write_zeroes": true, 00:13:40.953 "zcopy": true, 00:13:40.953 "get_zone_info": false, 00:13:40.953 "zone_management": false, 00:13:40.953 "zone_append": false, 00:13:40.953 "compare": false, 00:13:40.953 "compare_and_write": false, 00:13:40.953 "abort": true, 00:13:40.953 "seek_hole": false, 00:13:40.953 "seek_data": false, 00:13:40.953 "copy": true, 00:13:40.953 "nvme_iov_md": false 00:13:40.953 }, 00:13:40.953 "memory_domains": [ 00:13:40.953 { 00:13:40.953 "dma_device_id": "system", 00:13:40.953 "dma_device_type": 1 00:13:40.953 }, 00:13:40.953 { 00:13:40.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.953 "dma_device_type": 2 00:13:40.953 } 00:13:40.953 ], 00:13:40.953 "driver_specific": {} 00:13:40.953 }' 00:13:40.953 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.210 11:24:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.210 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.210 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.210 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.468 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.468 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.468 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.468 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:41.468 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.727 "name": "BaseBdev3", 00:13:41.727 "aliases": [ 00:13:41.727 "bc318407-81f6-4299-8ae6-74b86c4ea5a5" 00:13:41.727 ], 00:13:41.727 "product_name": "Malloc disk", 00:13:41.727 "block_size": 512, 00:13:41.727 "num_blocks": 65536, 00:13:41.727 "uuid": "bc318407-81f6-4299-8ae6-74b86c4ea5a5", 00:13:41.727 "assigned_rate_limits": { 00:13:41.727 "rw_ios_per_sec": 0, 00:13:41.727 "rw_mbytes_per_sec": 0, 00:13:41.727 "r_mbytes_per_sec": 0, 00:13:41.727 "w_mbytes_per_sec": 0 00:13:41.727 }, 00:13:41.727 "claimed": true, 00:13:41.727 "claim_type": "exclusive_write", 00:13:41.727 "zoned": false, 00:13:41.727 
"supported_io_types": { 00:13:41.727 "read": true, 00:13:41.727 "write": true, 00:13:41.727 "unmap": true, 00:13:41.727 "flush": true, 00:13:41.727 "reset": true, 00:13:41.727 "nvme_admin": false, 00:13:41.727 "nvme_io": false, 00:13:41.727 "nvme_io_md": false, 00:13:41.727 "write_zeroes": true, 00:13:41.727 "zcopy": true, 00:13:41.727 "get_zone_info": false, 00:13:41.727 "zone_management": false, 00:13:41.727 "zone_append": false, 00:13:41.727 "compare": false, 00:13:41.727 "compare_and_write": false, 00:13:41.727 "abort": true, 00:13:41.727 "seek_hole": false, 00:13:41.727 "seek_data": false, 00:13:41.727 "copy": true, 00:13:41.727 "nvme_iov_md": false 00:13:41.727 }, 00:13:41.727 "memory_domains": [ 00:13:41.727 { 00:13:41.727 "dma_device_id": "system", 00:13:41.727 "dma_device_type": 1 00:13:41.727 }, 00:13:41.727 { 00:13:41.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.727 "dma_device_type": 2 00:13:41.727 } 00:13:41.727 ], 00:13:41.727 "driver_specific": {} 00:13:41.727 }' 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.727 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.985 11:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:42.243 [2024-07-25 11:24:58.003261] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.243 [2024-07-25 11:24:58.003307] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.243 [2024-07-25 11:24:58.003432] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.243 [2024-07-25 11:24:58.003802] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.243 [2024-07-25 11:24:58.003833] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 72908 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72908 ']' 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72908 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:42.243 11:24:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72908 00:13:42.243 killing process with pid 72908 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72908' 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72908 00:13:42.243 [2024-07-25 11:24:58.043508] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.243 11:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72908 00:13:42.500 [2024-07-25 11:24:58.308921] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:43.925 00:13:43.925 real 0m31.896s 00:13:43.925 user 0m58.615s 00:13:43.925 sys 0m4.027s 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.925 ************************************ 00:13:43.925 END TEST raid_state_function_test 00:13:43.925 ************************************ 00:13:43.925 11:24:59 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:43.925 11:24:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:43.925 11:24:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.925 11:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.925 ************************************ 00:13:43.925 START TEST raid_state_function_test_sb 00:13:43.925 ************************************ 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=73879 00:13:43.925 Process raid pid: 73879 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 73879' 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 73879 /var/tmp/spdk-raid.sock 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73879 ']' 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.925 11:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.925 [2024-07-25 11:24:59.685098] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
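[Editor's sketch] The repeated verify_raid_bdev_state checks throughout this run all follow the same pattern: query bdev_raid_get_bdevs over the test's RPC socket and compare selected JSON fields with jq. A minimal standalone version of that pattern, under stated assumptions, is sketched below; the rpc.py path, the /var/tmp/spdk-raid.sock socket, the bdev_raid_get_bdevs call and the Existed_Raid name are taken from this log, while the verify_state helper name and its argument layout are illustrative assumptions, not part of bdev_raid.sh.

#!/usr/bin/env bash
# Sketch of the state-verification pattern seen in raid_state_function_test(_sb):
# fetch all raid bdevs over the SPDK RPC socket and assert on selected JSON fields.
set -euo pipefail

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_state() {
    local name=$1 expected_state=$2 expected_discovered=$3
    local info
    # bdev_raid_get_bdevs returns a JSON array; pick the entry for this raid bdev.
    info=$($rpc bdev_raid_get_bdevs all | jq --arg n "$name" '.[] | select(.name == $n)')
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]]
}

# Example: immediately after bdev_raid_create, before any base bdevs exist,
# the array should report state "configuring" with 0 base bdevs discovered.
verify_state Existed_Raid configuring 0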
00:13:43.925 [2024-07-25 11:24:59.685294] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.187 [2024-07-25 11:24:59.862845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.444 [2024-07-25 11:25:00.150959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.700 [2024-07-25 11:25:00.354334] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.700 [2024-07-25 11:25:00.354397] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.958 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.958 11:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:44.958 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:45.215 [2024-07-25 11:25:00.898838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.215 [2024-07-25 11:25:00.898916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.215 [2024-07-25 11:25:00.898937] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:45.215 [2024-07-25 11:25:00.898951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:45.215 [2024-07-25 11:25:00.898965] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:45.215 [2024-07-25 11:25:00.898977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.215 11:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.472 11:25:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.472 "name": "Existed_Raid", 00:13:45.472 "uuid": "15d0938d-8cd6-4541-b56d-b4699c13f043", 00:13:45.472 "strip_size_kb": 0, 00:13:45.472 "state": "configuring", 00:13:45.472 "raid_level": "raid1", 00:13:45.472 "superblock": true, 00:13:45.472 "num_base_bdevs": 3, 00:13:45.472 "num_base_bdevs_discovered": 0, 00:13:45.472 "num_base_bdevs_operational": 3, 00:13:45.472 "base_bdevs_list": [ 00:13:45.472 { 00:13:45.472 "name": "BaseBdev1", 00:13:45.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.472 "is_configured": false, 00:13:45.472 "data_offset": 0, 00:13:45.472 "data_size": 0 00:13:45.472 }, 00:13:45.472 { 00:13:45.472 "name": "BaseBdev2", 00:13:45.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.472 "is_configured": false, 00:13:45.472 "data_offset": 0, 00:13:45.472 "data_size": 0 00:13:45.472 }, 00:13:45.472 { 00:13:45.472 "name": "BaseBdev3", 00:13:45.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.472 "is_configured": false, 00:13:45.472 "data_offset": 0, 00:13:45.472 "data_size": 0 00:13:45.472 } 00:13:45.472 ] 00:13:45.472 }' 00:13:45.472 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.472 11:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.037 11:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.293 [2024-07-25 11:25:02.055097] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.293 [2024-07-25 11:25:02.055149] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:46.293 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:46.549 [2024-07-25 11:25:02.371224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.549 [2024-07-25 11:25:02.371300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.549 [2024-07-25 11:25:02.371323] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.549 [2024-07-25 11:25:02.371337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.549 [2024-07-25 11:25:02.371350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:46.549 [2024-07-25 11:25:02.371362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:46.549 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:46.806 [2024-07-25 11:25:02.652465] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.806 BaseBdev1 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.806 
11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.806 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.370 11:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:47.370 [ 00:13:47.370 { 00:13:47.370 "name": "BaseBdev1", 00:13:47.370 "aliases": [ 00:13:47.370 "6cc40008-d9c2-4829-b751-3a974f1e7457" 00:13:47.370 ], 00:13:47.370 "product_name": "Malloc disk", 00:13:47.370 "block_size": 512, 00:13:47.370 "num_blocks": 65536, 00:13:47.370 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:47.370 "assigned_rate_limits": { 00:13:47.370 "rw_ios_per_sec": 0, 00:13:47.370 "rw_mbytes_per_sec": 0, 00:13:47.370 "r_mbytes_per_sec": 0, 00:13:47.370 "w_mbytes_per_sec": 0 00:13:47.370 }, 00:13:47.370 "claimed": true, 00:13:47.370 "claim_type": "exclusive_write", 00:13:47.370 "zoned": false, 00:13:47.370 "supported_io_types": { 00:13:47.370 "read": true, 00:13:47.370 "write": true, 00:13:47.370 "unmap": true, 00:13:47.370 "flush": true, 00:13:47.370 "reset": true, 00:13:47.370 "nvme_admin": false, 00:13:47.370 "nvme_io": false, 00:13:47.370 "nvme_io_md": false, 00:13:47.370 "write_zeroes": true, 00:13:47.370 "zcopy": true, 00:13:47.370 "get_zone_info": false, 00:13:47.370 "zone_management": false, 00:13:47.370 "zone_append": false, 00:13:47.370 "compare": false, 00:13:47.370 "compare_and_write": false, 00:13:47.370 "abort": true, 00:13:47.370 "seek_hole": false, 00:13:47.370 "seek_data": false, 00:13:47.370 "copy": true, 00:13:47.370 "nvme_iov_md": false 00:13:47.370 }, 00:13:47.370 "memory_domains": [ 00:13:47.370 { 00:13:47.370 "dma_device_id": "system", 00:13:47.370 "dma_device_type": 1 00:13:47.370 }, 00:13:47.370 { 00:13:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.370 "dma_device_type": 2 00:13:47.370 } 00:13:47.370 ], 00:13:47.370 "driver_specific": {} 00:13:47.370 } 00:13:47.370 ] 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.370 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.627 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.627 "name": "Existed_Raid", 00:13:47.627 "uuid": "3964d0c2-c542-4113-82ab-4ba929ceb99b", 00:13:47.627 "strip_size_kb": 0, 00:13:47.627 "state": "configuring", 00:13:47.627 "raid_level": "raid1", 00:13:47.627 "superblock": true, 00:13:47.627 "num_base_bdevs": 3, 00:13:47.627 "num_base_bdevs_discovered": 1, 00:13:47.627 "num_base_bdevs_operational": 3, 00:13:47.627 "base_bdevs_list": [ 00:13:47.627 { 00:13:47.627 "name": "BaseBdev1", 00:13:47.627 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:47.627 "is_configured": true, 00:13:47.627 "data_offset": 2048, 00:13:47.627 "data_size": 63488 00:13:47.627 }, 00:13:47.627 { 00:13:47.627 "name": "BaseBdev2", 00:13:47.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.627 "is_configured": false, 00:13:47.627 "data_offset": 0, 00:13:47.627 "data_size": 0 00:13:47.627 }, 00:13:47.627 { 00:13:47.627 "name": "BaseBdev3", 00:13:47.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.627 "is_configured": false, 00:13:47.627 "data_offset": 0, 00:13:47.627 "data_size": 0 00:13:47.627 } 00:13:47.627 ] 00:13:47.627 }' 00:13:47.627 11:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.627 11:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.558 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:48.558 [2024-07-25 11:25:04.413025] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.558 [2024-07-25 11:25:04.413132] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:48.558 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:48.815 [2024-07-25 11:25:04.653156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.815 [2024-07-25 11:25:04.655587] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.815 [2024-07-25 11:25:04.655685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.815 [2024-07-25 11:25:04.655706] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.815 [2024-07-25 11:25:04.655720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.815 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.072 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.072 "name": "Existed_Raid", 00:13:49.072 "uuid": "958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:49.072 "strip_size_kb": 0, 00:13:49.072 "state": "configuring", 00:13:49.072 "raid_level": "raid1", 00:13:49.072 "superblock": true, 00:13:49.072 "num_base_bdevs": 3, 00:13:49.072 "num_base_bdevs_discovered": 1, 00:13:49.072 "num_base_bdevs_operational": 3, 00:13:49.072 "base_bdevs_list": [ 00:13:49.072 { 00:13:49.072 "name": "BaseBdev1", 00:13:49.072 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:49.072 "is_configured": true, 00:13:49.072 "data_offset": 2048, 00:13:49.072 "data_size": 63488 00:13:49.072 }, 00:13:49.072 { 00:13:49.072 "name": "BaseBdev2", 00:13:49.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.073 "is_configured": false, 00:13:49.073 "data_offset": 0, 00:13:49.073 "data_size": 0 00:13:49.073 }, 00:13:49.073 { 00:13:49.073 "name": "BaseBdev3", 00:13:49.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.073 "is_configured": false, 00:13:49.073 "data_offset": 0, 00:13:49.073 "data_size": 0 00:13:49.073 } 00:13:49.073 ] 00:13:49.073 }' 00:13:49.073 11:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.073 11:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.004 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.004 [2024-07-25 11:25:05.883149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.004 BaseBdev2 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.280 11:25:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.280 11:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.280 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.538 [ 00:13:50.538 { 00:13:50.538 "name": "BaseBdev2", 00:13:50.538 "aliases": [ 00:13:50.538 "3c10e97d-2069-4209-a8db-2ef456637bd6" 00:13:50.538 ], 00:13:50.538 "product_name": "Malloc disk", 00:13:50.538 "block_size": 512, 00:13:50.538 "num_blocks": 65536, 00:13:50.538 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:50.538 "assigned_rate_limits": { 00:13:50.538 "rw_ios_per_sec": 0, 00:13:50.538 "rw_mbytes_per_sec": 0, 00:13:50.538 "r_mbytes_per_sec": 0, 00:13:50.538 "w_mbytes_per_sec": 0 00:13:50.538 }, 00:13:50.538 "claimed": true, 00:13:50.538 "claim_type": "exclusive_write", 00:13:50.538 "zoned": false, 00:13:50.538 "supported_io_types": { 00:13:50.538 "read": true, 00:13:50.538 "write": true, 00:13:50.538 "unmap": true, 00:13:50.538 "flush": true, 00:13:50.538 "reset": true, 00:13:50.538 "nvme_admin": false, 00:13:50.538 "nvme_io": false, 00:13:50.538 "nvme_io_md": false, 00:13:50.538 "write_zeroes": true, 00:13:50.538 "zcopy": true, 00:13:50.538 "get_zone_info": false, 00:13:50.538 "zone_management": false, 00:13:50.538 "zone_append": false, 00:13:50.538 "compare": false, 00:13:50.538 "compare_and_write": false, 00:13:50.538 "abort": true, 00:13:50.538 "seek_hole": false, 00:13:50.538 "seek_data": false, 00:13:50.538 "copy": true, 00:13:50.538 "nvme_iov_md": false 00:13:50.538 }, 00:13:50.538 "memory_domains": [ 00:13:50.538 { 00:13:50.538 "dma_device_id": "system", 00:13:50.538 "dma_device_type": 1 00:13:50.538 }, 00:13:50.538 { 00:13:50.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.538 "dma_device_type": 2 00:13:50.538 } 00:13:50.538 ], 00:13:50.538 "driver_specific": {} 00:13:50.538 } 00:13:50.538 ] 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.538 
11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.538 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.103 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:51.103 "name": "Existed_Raid", 00:13:51.103 "uuid": "958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:51.103 "strip_size_kb": 0, 00:13:51.103 "state": "configuring", 00:13:51.103 "raid_level": "raid1", 00:13:51.103 "superblock": true, 00:13:51.103 "num_base_bdevs": 3, 00:13:51.103 "num_base_bdevs_discovered": 2, 00:13:51.103 "num_base_bdevs_operational": 3, 00:13:51.103 "base_bdevs_list": [ 00:13:51.103 { 00:13:51.103 "name": "BaseBdev1", 00:13:51.103 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:51.103 "is_configured": true, 00:13:51.103 "data_offset": 2048, 00:13:51.103 "data_size": 63488 00:13:51.103 }, 00:13:51.103 { 00:13:51.103 "name": "BaseBdev2", 00:13:51.103 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:51.103 "is_configured": true, 00:13:51.103 "data_offset": 2048, 00:13:51.103 "data_size": 63488 00:13:51.103 }, 00:13:51.104 { 00:13:51.104 "name": "BaseBdev3", 00:13:51.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.104 "is_configured": false, 00:13:51.104 "data_offset": 0, 00:13:51.104 "data_size": 0 00:13:51.104 } 00:13:51.104 ] 00:13:51.104 }' 00:13:51.104 11:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:51.104 11:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.688 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.944 [2024-07-25 11:25:07.653430] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.944 [2024-07-25 11:25:07.653868] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:51.944 [2024-07-25 11:25:07.653899] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.944 [2024-07-25 11:25:07.654251] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:51.944 [2024-07-25 11:25:07.654521] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:51.944 [2024-07-25 11:25:07.654557] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:51.944 BaseBdev3 00:13:51.944 [2024-07-25 11:25:07.654786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.944 
11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.944 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:52.200 11:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.457 [ 00:13:52.457 { 00:13:52.457 "name": "BaseBdev3", 00:13:52.457 "aliases": [ 00:13:52.457 "354663a6-8ff4-4c42-911f-00cf96234a42" 00:13:52.457 ], 00:13:52.457 "product_name": "Malloc disk", 00:13:52.457 "block_size": 512, 00:13:52.457 "num_blocks": 65536, 00:13:52.457 "uuid": "354663a6-8ff4-4c42-911f-00cf96234a42", 00:13:52.457 "assigned_rate_limits": { 00:13:52.457 "rw_ios_per_sec": 0, 00:13:52.457 "rw_mbytes_per_sec": 0, 00:13:52.457 "r_mbytes_per_sec": 0, 00:13:52.457 "w_mbytes_per_sec": 0 00:13:52.457 }, 00:13:52.457 "claimed": true, 00:13:52.457 "claim_type": "exclusive_write", 00:13:52.457 "zoned": false, 00:13:52.457 "supported_io_types": { 00:13:52.457 "read": true, 00:13:52.457 "write": true, 00:13:52.457 "unmap": true, 00:13:52.457 "flush": true, 00:13:52.457 "reset": true, 00:13:52.457 "nvme_admin": false, 00:13:52.457 "nvme_io": false, 00:13:52.457 "nvme_io_md": false, 00:13:52.457 "write_zeroes": true, 00:13:52.457 "zcopy": true, 00:13:52.457 "get_zone_info": false, 00:13:52.457 "zone_management": false, 00:13:52.457 "zone_append": false, 00:13:52.457 "compare": false, 00:13:52.457 "compare_and_write": false, 00:13:52.457 "abort": true, 00:13:52.457 "seek_hole": false, 00:13:52.457 "seek_data": false, 00:13:52.457 "copy": true, 00:13:52.457 "nvme_iov_md": false 00:13:52.457 }, 00:13:52.457 "memory_domains": [ 00:13:52.457 { 00:13:52.457 "dma_device_id": "system", 00:13:52.457 "dma_device_type": 1 00:13:52.457 }, 00:13:52.457 { 00:13:52.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.457 "dma_device_type": 2 00:13:52.457 } 00:13:52.457 ], 00:13:52.457 "driver_specific": {} 00:13:52.457 } 00:13:52.457 ] 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.457 
11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.457 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.714 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.714 "name": "Existed_Raid", 00:13:52.714 "uuid": "958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:52.714 "strip_size_kb": 0, 00:13:52.714 "state": "online", 00:13:52.714 "raid_level": "raid1", 00:13:52.714 "superblock": true, 00:13:52.714 "num_base_bdevs": 3, 00:13:52.714 "num_base_bdevs_discovered": 3, 00:13:52.714 "num_base_bdevs_operational": 3, 00:13:52.714 "base_bdevs_list": [ 00:13:52.714 { 00:13:52.714 "name": "BaseBdev1", 00:13:52.714 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:52.714 "is_configured": true, 00:13:52.714 "data_offset": 2048, 00:13:52.714 "data_size": 63488 00:13:52.714 }, 00:13:52.714 { 00:13:52.714 "name": "BaseBdev2", 00:13:52.714 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:52.714 "is_configured": true, 00:13:52.714 "data_offset": 2048, 00:13:52.714 "data_size": 63488 00:13:52.714 }, 00:13:52.714 { 00:13:52.714 "name": "BaseBdev3", 00:13:52.714 "uuid": "354663a6-8ff4-4c42-911f-00cf96234a42", 00:13:52.714 "is_configured": true, 00:13:52.714 "data_offset": 2048, 00:13:52.714 "data_size": 63488 00:13:52.714 } 00:13:52.714 ] 00:13:52.714 }' 00:13:52.714 11:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.714 11:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:53.279 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:53.537 [2024-07-25 11:25:09.238346] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.537 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:53.537 "name": "Existed_Raid", 00:13:53.537 "aliases": [ 00:13:53.537 "958c9c40-984b-46ef-837a-cc6287ffaa8b" 00:13:53.537 ], 00:13:53.537 "product_name": "Raid Volume", 00:13:53.537 "block_size": 512, 00:13:53.537 "num_blocks": 63488, 00:13:53.537 "uuid": 
"958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:53.537 "assigned_rate_limits": { 00:13:53.537 "rw_ios_per_sec": 0, 00:13:53.537 "rw_mbytes_per_sec": 0, 00:13:53.537 "r_mbytes_per_sec": 0, 00:13:53.537 "w_mbytes_per_sec": 0 00:13:53.537 }, 00:13:53.537 "claimed": false, 00:13:53.537 "zoned": false, 00:13:53.537 "supported_io_types": { 00:13:53.537 "read": true, 00:13:53.537 "write": true, 00:13:53.537 "unmap": false, 00:13:53.537 "flush": false, 00:13:53.537 "reset": true, 00:13:53.537 "nvme_admin": false, 00:13:53.537 "nvme_io": false, 00:13:53.537 "nvme_io_md": false, 00:13:53.537 "write_zeroes": true, 00:13:53.537 "zcopy": false, 00:13:53.537 "get_zone_info": false, 00:13:53.537 "zone_management": false, 00:13:53.537 "zone_append": false, 00:13:53.537 "compare": false, 00:13:53.537 "compare_and_write": false, 00:13:53.537 "abort": false, 00:13:53.537 "seek_hole": false, 00:13:53.537 "seek_data": false, 00:13:53.537 "copy": false, 00:13:53.537 "nvme_iov_md": false 00:13:53.537 }, 00:13:53.537 "memory_domains": [ 00:13:53.537 { 00:13:53.537 "dma_device_id": "system", 00:13:53.537 "dma_device_type": 1 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.537 "dma_device_type": 2 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "dma_device_id": "system", 00:13:53.537 "dma_device_type": 1 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.537 "dma_device_type": 2 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "dma_device_id": "system", 00:13:53.537 "dma_device_type": 1 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.537 "dma_device_type": 2 00:13:53.537 } 00:13:53.537 ], 00:13:53.537 "driver_specific": { 00:13:53.537 "raid": { 00:13:53.537 "uuid": "958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:53.537 "strip_size_kb": 0, 00:13:53.537 "state": "online", 00:13:53.537 "raid_level": "raid1", 00:13:53.537 "superblock": true, 00:13:53.537 "num_base_bdevs": 3, 00:13:53.537 "num_base_bdevs_discovered": 3, 00:13:53.537 "num_base_bdevs_operational": 3, 00:13:53.537 "base_bdevs_list": [ 00:13:53.537 { 00:13:53.537 "name": "BaseBdev1", 00:13:53.537 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:53.537 "is_configured": true, 00:13:53.537 "data_offset": 2048, 00:13:53.537 "data_size": 63488 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "name": "BaseBdev2", 00:13:53.537 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:53.537 "is_configured": true, 00:13:53.537 "data_offset": 2048, 00:13:53.537 "data_size": 63488 00:13:53.537 }, 00:13:53.537 { 00:13:53.537 "name": "BaseBdev3", 00:13:53.537 "uuid": "354663a6-8ff4-4c42-911f-00cf96234a42", 00:13:53.537 "is_configured": true, 00:13:53.537 "data_offset": 2048, 00:13:53.537 "data_size": 63488 00:13:53.537 } 00:13:53.537 ] 00:13:53.537 } 00:13:53.537 } 00:13:53.537 }' 00:13:53.537 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:53.537 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:53.537 BaseBdev2 00:13:53.537 BaseBdev3' 00:13:53.537 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:53.537 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:53.537 11:25:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:53.795 "name": "BaseBdev1", 00:13:53.795 "aliases": [ 00:13:53.795 "6cc40008-d9c2-4829-b751-3a974f1e7457" 00:13:53.795 ], 00:13:53.795 "product_name": "Malloc disk", 00:13:53.795 "block_size": 512, 00:13:53.795 "num_blocks": 65536, 00:13:53.795 "uuid": "6cc40008-d9c2-4829-b751-3a974f1e7457", 00:13:53.795 "assigned_rate_limits": { 00:13:53.795 "rw_ios_per_sec": 0, 00:13:53.795 "rw_mbytes_per_sec": 0, 00:13:53.795 "r_mbytes_per_sec": 0, 00:13:53.795 "w_mbytes_per_sec": 0 00:13:53.795 }, 00:13:53.795 "claimed": true, 00:13:53.795 "claim_type": "exclusive_write", 00:13:53.795 "zoned": false, 00:13:53.795 "supported_io_types": { 00:13:53.795 "read": true, 00:13:53.795 "write": true, 00:13:53.795 "unmap": true, 00:13:53.795 "flush": true, 00:13:53.795 "reset": true, 00:13:53.795 "nvme_admin": false, 00:13:53.795 "nvme_io": false, 00:13:53.795 "nvme_io_md": false, 00:13:53.795 "write_zeroes": true, 00:13:53.795 "zcopy": true, 00:13:53.795 "get_zone_info": false, 00:13:53.795 "zone_management": false, 00:13:53.795 "zone_append": false, 00:13:53.795 "compare": false, 00:13:53.795 "compare_and_write": false, 00:13:53.795 "abort": true, 00:13:53.795 "seek_hole": false, 00:13:53.795 "seek_data": false, 00:13:53.795 "copy": true, 00:13:53.795 "nvme_iov_md": false 00:13:53.795 }, 00:13:53.795 "memory_domains": [ 00:13:53.795 { 00:13:53.795 "dma_device_id": "system", 00:13:53.795 "dma_device_type": 1 00:13:53.795 }, 00:13:53.795 { 00:13:53.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.795 "dma_device_type": 2 00:13:53.795 } 00:13:53.795 ], 00:13:53.795 "driver_specific": {} 00:13:53.795 }' 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:53.795 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:54.053 11:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:54.311 11:25:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:54.311 "name": "BaseBdev2", 00:13:54.311 "aliases": [ 00:13:54.311 "3c10e97d-2069-4209-a8db-2ef456637bd6" 00:13:54.311 ], 00:13:54.311 "product_name": "Malloc disk", 00:13:54.311 "block_size": 512, 00:13:54.311 "num_blocks": 65536, 00:13:54.311 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:54.311 "assigned_rate_limits": { 00:13:54.311 "rw_ios_per_sec": 0, 00:13:54.311 "rw_mbytes_per_sec": 0, 00:13:54.311 "r_mbytes_per_sec": 0, 00:13:54.311 "w_mbytes_per_sec": 0 00:13:54.311 }, 00:13:54.311 "claimed": true, 00:13:54.311 "claim_type": "exclusive_write", 00:13:54.311 "zoned": false, 00:13:54.311 "supported_io_types": { 00:13:54.311 "read": true, 00:13:54.311 "write": true, 00:13:54.311 "unmap": true, 00:13:54.311 "flush": true, 00:13:54.311 "reset": true, 00:13:54.311 "nvme_admin": false, 00:13:54.311 "nvme_io": false, 00:13:54.311 "nvme_io_md": false, 00:13:54.311 "write_zeroes": true, 00:13:54.311 "zcopy": true, 00:13:54.311 "get_zone_info": false, 00:13:54.311 "zone_management": false, 00:13:54.311 "zone_append": false, 00:13:54.311 "compare": false, 00:13:54.311 "compare_and_write": false, 00:13:54.311 "abort": true, 00:13:54.311 "seek_hole": false, 00:13:54.311 "seek_data": false, 00:13:54.311 "copy": true, 00:13:54.311 "nvme_iov_md": false 00:13:54.311 }, 00:13:54.311 "memory_domains": [ 00:13:54.311 { 00:13:54.311 "dma_device_id": "system", 00:13:54.311 "dma_device_type": 1 00:13:54.311 }, 00:13:54.311 { 00:13:54.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.311 "dma_device_type": 2 00:13:54.311 } 00:13:54.311 ], 00:13:54.311 "driver_specific": {} 00:13:54.311 }' 00:13:54.311 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.568 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:54.855 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:55.113 "name": "BaseBdev3", 00:13:55.113 "aliases": [ 00:13:55.113 
"354663a6-8ff4-4c42-911f-00cf96234a42" 00:13:55.113 ], 00:13:55.113 "product_name": "Malloc disk", 00:13:55.113 "block_size": 512, 00:13:55.113 "num_blocks": 65536, 00:13:55.113 "uuid": "354663a6-8ff4-4c42-911f-00cf96234a42", 00:13:55.113 "assigned_rate_limits": { 00:13:55.113 "rw_ios_per_sec": 0, 00:13:55.113 "rw_mbytes_per_sec": 0, 00:13:55.113 "r_mbytes_per_sec": 0, 00:13:55.113 "w_mbytes_per_sec": 0 00:13:55.113 }, 00:13:55.113 "claimed": true, 00:13:55.113 "claim_type": "exclusive_write", 00:13:55.113 "zoned": false, 00:13:55.113 "supported_io_types": { 00:13:55.113 "read": true, 00:13:55.113 "write": true, 00:13:55.113 "unmap": true, 00:13:55.113 "flush": true, 00:13:55.113 "reset": true, 00:13:55.113 "nvme_admin": false, 00:13:55.113 "nvme_io": false, 00:13:55.113 "nvme_io_md": false, 00:13:55.113 "write_zeroes": true, 00:13:55.113 "zcopy": true, 00:13:55.113 "get_zone_info": false, 00:13:55.113 "zone_management": false, 00:13:55.113 "zone_append": false, 00:13:55.113 "compare": false, 00:13:55.113 "compare_and_write": false, 00:13:55.113 "abort": true, 00:13:55.113 "seek_hole": false, 00:13:55.113 "seek_data": false, 00:13:55.113 "copy": true, 00:13:55.113 "nvme_iov_md": false 00:13:55.113 }, 00:13:55.113 "memory_domains": [ 00:13:55.113 { 00:13:55.113 "dma_device_id": "system", 00:13:55.113 "dma_device_type": 1 00:13:55.113 }, 00:13:55.113 { 00:13:55.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.113 "dma_device_type": 2 00:13:55.113 } 00:13:55.113 ], 00:13:55.113 "driver_specific": {} 00:13:55.113 }' 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.113 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:55.371 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:55.371 11:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:55.371 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:55.629 [2024-07-25 11:25:11.438733] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:55.888 11:25:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.888 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.147 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.147 "name": "Existed_Raid", 00:13:56.147 "uuid": "958c9c40-984b-46ef-837a-cc6287ffaa8b", 00:13:56.147 "strip_size_kb": 0, 00:13:56.147 "state": "online", 00:13:56.147 "raid_level": "raid1", 00:13:56.147 "superblock": true, 00:13:56.147 "num_base_bdevs": 3, 00:13:56.147 "num_base_bdevs_discovered": 2, 00:13:56.147 "num_base_bdevs_operational": 2, 00:13:56.147 "base_bdevs_list": [ 00:13:56.147 { 00:13:56.147 "name": null, 00:13:56.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.147 "is_configured": false, 00:13:56.147 "data_offset": 2048, 00:13:56.147 "data_size": 63488 00:13:56.147 }, 00:13:56.147 { 00:13:56.147 "name": "BaseBdev2", 00:13:56.147 "uuid": "3c10e97d-2069-4209-a8db-2ef456637bd6", 00:13:56.147 "is_configured": true, 00:13:56.147 "data_offset": 2048, 00:13:56.147 "data_size": 63488 00:13:56.147 }, 00:13:56.147 { 00:13:56.147 "name": "BaseBdev3", 00:13:56.147 "uuid": "354663a6-8ff4-4c42-911f-00cf96234a42", 00:13:56.147 "is_configured": true, 00:13:56.147 "data_offset": 2048, 00:13:56.147 "data_size": 63488 00:13:56.147 } 00:13:56.147 ] 00:13:56.147 }' 00:13:56.147 11:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.147 11:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.713 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:56.713 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:56.713 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.713 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:56.972 11:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:56.972 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.972 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:57.230 [2024-07-25 11:25:12.880888] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.230 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:57.230 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:57.230 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.230 11:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:57.488 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:57.488 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:57.488 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:57.745 [2024-07-25 11:25:13.474170] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.745 [2024-07-25 11:25:13.474349] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.745 [2024-07-25 11:25:13.559424] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.745 [2024-07-25 11:25:13.559514] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.745 [2024-07-25 11:25:13.559534] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:57.745 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:57.745 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:57.745 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.745 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:58.309 11:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.567 BaseBdev2 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:58.567 11:25:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.567 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.825 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:59.082 [ 00:13:59.082 { 00:13:59.082 "name": "BaseBdev2", 00:13:59.082 "aliases": [ 00:13:59.082 "37dc9f31-41fe-4f44-83ef-5a157780b1f9" 00:13:59.082 ], 00:13:59.082 "product_name": "Malloc disk", 00:13:59.082 "block_size": 512, 00:13:59.082 "num_blocks": 65536, 00:13:59.082 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:13:59.082 "assigned_rate_limits": { 00:13:59.082 "rw_ios_per_sec": 0, 00:13:59.082 "rw_mbytes_per_sec": 0, 00:13:59.082 "r_mbytes_per_sec": 0, 00:13:59.082 "w_mbytes_per_sec": 0 00:13:59.082 }, 00:13:59.082 "claimed": false, 00:13:59.082 "zoned": false, 00:13:59.082 "supported_io_types": { 00:13:59.082 "read": true, 00:13:59.082 "write": true, 00:13:59.082 "unmap": true, 00:13:59.082 "flush": true, 00:13:59.082 "reset": true, 00:13:59.082 "nvme_admin": false, 00:13:59.082 "nvme_io": false, 00:13:59.082 "nvme_io_md": false, 00:13:59.082 "write_zeroes": true, 00:13:59.082 "zcopy": true, 00:13:59.082 "get_zone_info": false, 00:13:59.082 "zone_management": false, 00:13:59.082 "zone_append": false, 00:13:59.082 "compare": false, 00:13:59.082 "compare_and_write": false, 00:13:59.082 "abort": true, 00:13:59.082 "seek_hole": false, 00:13:59.083 "seek_data": false, 00:13:59.083 "copy": true, 00:13:59.083 "nvme_iov_md": false 00:13:59.083 }, 00:13:59.083 "memory_domains": [ 00:13:59.083 { 00:13:59.083 "dma_device_id": "system", 00:13:59.083 "dma_device_type": 1 00:13:59.083 }, 00:13:59.083 { 00:13:59.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.083 "dma_device_type": 2 00:13:59.083 } 00:13:59.083 ], 00:13:59.083 "driver_specific": {} 00:13:59.083 } 00:13:59.083 ] 00:13:59.083 11:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:59.083 11:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:59.083 11:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:59.083 11:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.340 BaseBdev3 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:59.340 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.598 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:59.856 [ 00:13:59.857 { 00:13:59.857 "name": "BaseBdev3", 00:13:59.857 "aliases": [ 00:13:59.857 "75e9a52e-2d0f-495e-8bbb-998858e87d52" 00:13:59.857 ], 00:13:59.857 "product_name": "Malloc disk", 00:13:59.857 "block_size": 512, 00:13:59.857 "num_blocks": 65536, 00:13:59.857 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:13:59.857 "assigned_rate_limits": { 00:13:59.857 "rw_ios_per_sec": 0, 00:13:59.857 "rw_mbytes_per_sec": 0, 00:13:59.857 "r_mbytes_per_sec": 0, 00:13:59.857 "w_mbytes_per_sec": 0 00:13:59.857 }, 00:13:59.857 "claimed": false, 00:13:59.857 "zoned": false, 00:13:59.857 "supported_io_types": { 00:13:59.857 "read": true, 00:13:59.857 "write": true, 00:13:59.857 "unmap": true, 00:13:59.857 "flush": true, 00:13:59.857 "reset": true, 00:13:59.857 "nvme_admin": false, 00:13:59.857 "nvme_io": false, 00:13:59.857 "nvme_io_md": false, 00:13:59.857 "write_zeroes": true, 00:13:59.857 "zcopy": true, 00:13:59.857 "get_zone_info": false, 00:13:59.857 "zone_management": false, 00:13:59.857 "zone_append": false, 00:13:59.857 "compare": false, 00:13:59.857 "compare_and_write": false, 00:13:59.857 "abort": true, 00:13:59.857 "seek_hole": false, 00:13:59.857 "seek_data": false, 00:13:59.857 "copy": true, 00:13:59.857 "nvme_iov_md": false 00:13:59.857 }, 00:13:59.857 "memory_domains": [ 00:13:59.857 { 00:13:59.857 "dma_device_id": "system", 00:13:59.857 "dma_device_type": 1 00:13:59.857 }, 00:13:59.857 { 00:13:59.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.857 "dma_device_type": 2 00:13:59.857 } 00:13:59.857 ], 00:13:59.857 "driver_specific": {} 00:13:59.857 } 00:13:59.857 ] 00:13:59.857 11:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:59.857 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:59.857 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:59.857 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:00.114 [2024-07-25 11:25:15.749089] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.114 [2024-07-25 11:25:15.749160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.114 [2024-07-25 11:25:15.749227] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.114 [2024-07-25 11:25:15.751822] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:00.114 11:25:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.114 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.115 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.115 11:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.372 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.373 "name": "Existed_Raid", 00:14:00.373 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:00.373 "strip_size_kb": 0, 00:14:00.373 "state": "configuring", 00:14:00.373 "raid_level": "raid1", 00:14:00.373 "superblock": true, 00:14:00.373 "num_base_bdevs": 3, 00:14:00.373 "num_base_bdevs_discovered": 2, 00:14:00.373 "num_base_bdevs_operational": 3, 00:14:00.373 "base_bdevs_list": [ 00:14:00.373 { 00:14:00.373 "name": "BaseBdev1", 00:14:00.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.373 "is_configured": false, 00:14:00.373 "data_offset": 0, 00:14:00.373 "data_size": 0 00:14:00.373 }, 00:14:00.373 { 00:14:00.373 "name": "BaseBdev2", 00:14:00.373 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:00.373 "is_configured": true, 00:14:00.373 "data_offset": 2048, 00:14:00.373 "data_size": 63488 00:14:00.373 }, 00:14:00.373 { 00:14:00.373 "name": "BaseBdev3", 00:14:00.373 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:00.373 "is_configured": true, 00:14:00.373 "data_offset": 2048, 00:14:00.373 "data_size": 63488 00:14:00.373 } 00:14:00.373 ] 00:14:00.373 }' 00:14:00.373 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.373 11:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.938 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:01.196 [2024-07-25 11:25:16.921423] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.196 11:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.454 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.454 "name": "Existed_Raid", 00:14:01.454 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:01.454 "strip_size_kb": 0, 00:14:01.454 "state": "configuring", 00:14:01.454 "raid_level": "raid1", 00:14:01.454 "superblock": true, 00:14:01.454 "num_base_bdevs": 3, 00:14:01.454 "num_base_bdevs_discovered": 1, 00:14:01.454 "num_base_bdevs_operational": 3, 00:14:01.454 "base_bdevs_list": [ 00:14:01.454 { 00:14:01.454 "name": "BaseBdev1", 00:14:01.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.454 "is_configured": false, 00:14:01.454 "data_offset": 0, 00:14:01.454 "data_size": 0 00:14:01.454 }, 00:14:01.454 { 00:14:01.454 "name": null, 00:14:01.454 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:01.454 "is_configured": false, 00:14:01.454 "data_offset": 2048, 00:14:01.454 "data_size": 63488 00:14:01.454 }, 00:14:01.454 { 00:14:01.454 "name": "BaseBdev3", 00:14:01.454 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:01.454 "is_configured": true, 00:14:01.454 "data_offset": 2048, 00:14:01.454 "data_size": 63488 00:14:01.454 } 00:14:01.454 ] 00:14:01.454 }' 00:14:01.454 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.454 11:25:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.020 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.020 11:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.279 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:02.279 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.538 [2024-07-25 11:25:18.350154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.538 BaseBdev1 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.538 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.795 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.054 [ 00:14:03.054 { 00:14:03.054 "name": "BaseBdev1", 00:14:03.054 "aliases": [ 00:14:03.054 "9372656f-33d2-40eb-a913-987d61bdc788" 00:14:03.054 ], 00:14:03.054 "product_name": "Malloc disk", 00:14:03.054 "block_size": 512, 00:14:03.054 "num_blocks": 65536, 00:14:03.054 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:03.054 "assigned_rate_limits": { 00:14:03.054 "rw_ios_per_sec": 0, 00:14:03.054 "rw_mbytes_per_sec": 0, 00:14:03.054 "r_mbytes_per_sec": 0, 00:14:03.054 "w_mbytes_per_sec": 0 00:14:03.054 }, 00:14:03.054 "claimed": true, 00:14:03.054 "claim_type": "exclusive_write", 00:14:03.054 "zoned": false, 00:14:03.054 "supported_io_types": { 00:14:03.054 "read": true, 00:14:03.054 "write": true, 00:14:03.054 "unmap": true, 00:14:03.054 "flush": true, 00:14:03.054 "reset": true, 00:14:03.054 "nvme_admin": false, 00:14:03.054 "nvme_io": false, 00:14:03.054 "nvme_io_md": false, 00:14:03.054 "write_zeroes": true, 00:14:03.054 "zcopy": true, 00:14:03.054 "get_zone_info": false, 00:14:03.054 "zone_management": false, 00:14:03.054 "zone_append": false, 00:14:03.054 "compare": false, 00:14:03.054 "compare_and_write": false, 00:14:03.054 "abort": true, 00:14:03.054 "seek_hole": false, 00:14:03.054 "seek_data": false, 00:14:03.054 "copy": true, 00:14:03.054 "nvme_iov_md": false 00:14:03.054 }, 00:14:03.054 "memory_domains": [ 00:14:03.054 { 00:14:03.054 "dma_device_id": "system", 00:14:03.054 "dma_device_type": 1 00:14:03.054 }, 00:14:03.054 { 00:14:03.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.054 "dma_device_type": 2 00:14:03.054 } 00:14:03.054 ], 00:14:03.054 "driver_specific": {} 00:14:03.054 } 00:14:03.054 ] 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.054 11:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.312 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.312 "name": "Existed_Raid", 00:14:03.312 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:03.312 "strip_size_kb": 0, 00:14:03.312 "state": "configuring", 00:14:03.312 "raid_level": "raid1", 00:14:03.312 "superblock": true, 00:14:03.312 "num_base_bdevs": 3, 00:14:03.312 "num_base_bdevs_discovered": 2, 00:14:03.312 "num_base_bdevs_operational": 3, 00:14:03.312 "base_bdevs_list": [ 00:14:03.312 { 00:14:03.312 "name": "BaseBdev1", 00:14:03.312 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 }, 00:14:03.312 { 00:14:03.312 "name": null, 00:14:03.312 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:03.312 "is_configured": false, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 }, 00:14:03.312 { 00:14:03.312 "name": "BaseBdev3", 00:14:03.312 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:03.312 "is_configured": true, 00:14:03.312 "data_offset": 2048, 00:14:03.312 "data_size": 63488 00:14:03.312 } 00:14:03.312 ] 00:14:03.312 }' 00:14:03.312 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.312 11:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.878 11:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:04.446 [2024-07-25 11:25:20.262789] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.446 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.704 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.704 "name": "Existed_Raid", 00:14:04.704 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:04.704 "strip_size_kb": 0, 00:14:04.704 "state": "configuring", 00:14:04.704 "raid_level": "raid1", 00:14:04.704 "superblock": true, 00:14:04.704 "num_base_bdevs": 3, 00:14:04.704 "num_base_bdevs_discovered": 1, 00:14:04.704 "num_base_bdevs_operational": 3, 00:14:04.704 "base_bdevs_list": [ 00:14:04.704 { 00:14:04.704 "name": "BaseBdev1", 00:14:04.704 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:04.704 "is_configured": true, 00:14:04.704 "data_offset": 2048, 00:14:04.704 "data_size": 63488 00:14:04.704 }, 00:14:04.704 { 00:14:04.704 "name": null, 00:14:04.704 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:04.704 "is_configured": false, 00:14:04.704 "data_offset": 2048, 00:14:04.704 "data_size": 63488 00:14:04.704 }, 00:14:04.704 { 00:14:04.704 "name": null, 00:14:04.704 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:04.704 "is_configured": false, 00:14:04.704 "data_offset": 2048, 00:14:04.704 "data_size": 63488 00:14:04.704 } 00:14:04.704 ] 00:14:04.704 }' 00:14:04.704 11:25:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.704 11:25:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.671 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.671 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.671 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:05.671 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:05.932 [2024-07-25 11:25:21.759244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.932 11:25:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.932 11:25:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.190 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.190 "name": "Existed_Raid", 00:14:06.190 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:06.190 "strip_size_kb": 0, 00:14:06.190 "state": "configuring", 00:14:06.190 "raid_level": "raid1", 00:14:06.190 "superblock": true, 00:14:06.190 "num_base_bdevs": 3, 00:14:06.190 "num_base_bdevs_discovered": 2, 00:14:06.190 "num_base_bdevs_operational": 3, 00:14:06.190 "base_bdevs_list": [ 00:14:06.190 { 00:14:06.190 "name": "BaseBdev1", 00:14:06.190 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:06.190 "is_configured": true, 00:14:06.190 "data_offset": 2048, 00:14:06.190 "data_size": 63488 00:14:06.190 }, 00:14:06.190 { 00:14:06.190 "name": null, 00:14:06.190 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:06.190 "is_configured": false, 00:14:06.190 "data_offset": 2048, 00:14:06.190 "data_size": 63488 00:14:06.190 }, 00:14:06.190 { 00:14:06.190 "name": "BaseBdev3", 00:14:06.190 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:06.190 "is_configured": true, 00:14:06.190 "data_offset": 2048, 00:14:06.190 "data_size": 63488 00:14:06.190 } 00:14:06.190 ] 00:14:06.190 }' 00:14:06.190 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.190 11:25:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.123 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.123 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:07.123 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:07.123 11:25:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:07.381 [2024-07-25 11:25:23.243827] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.641 11:25:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.641 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.899 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.899 "name": "Existed_Raid", 00:14:07.899 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:07.899 "strip_size_kb": 0, 00:14:07.899 "state": "configuring", 00:14:07.899 "raid_level": "raid1", 00:14:07.899 "superblock": true, 00:14:07.899 "num_base_bdevs": 3, 00:14:07.899 "num_base_bdevs_discovered": 1, 00:14:07.899 "num_base_bdevs_operational": 3, 00:14:07.899 "base_bdevs_list": [ 00:14:07.899 { 00:14:07.899 "name": null, 00:14:07.899 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:07.899 "is_configured": false, 00:14:07.899 "data_offset": 2048, 00:14:07.899 "data_size": 63488 00:14:07.899 }, 00:14:07.899 { 00:14:07.899 "name": null, 00:14:07.899 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:07.899 "is_configured": false, 00:14:07.899 "data_offset": 2048, 00:14:07.899 "data_size": 63488 00:14:07.899 }, 00:14:07.899 { 00:14:07.899 "name": "BaseBdev3", 00:14:07.899 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:07.899 "is_configured": true, 00:14:07.899 "data_offset": 2048, 00:14:07.899 "data_size": 63488 00:14:07.899 } 00:14:07.899 ] 00:14:07.899 }' 00:14:07.899 11:25:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.899 11:25:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.465 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.465 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.723 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:08.723 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:08.982 [2024-07-25 11:25:24.803777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.982 11:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.240 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.240 "name": "Existed_Raid", 00:14:09.240 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:09.240 "strip_size_kb": 0, 00:14:09.240 "state": "configuring", 00:14:09.240 "raid_level": "raid1", 00:14:09.240 "superblock": true, 00:14:09.240 "num_base_bdevs": 3, 00:14:09.240 "num_base_bdevs_discovered": 2, 00:14:09.240 "num_base_bdevs_operational": 3, 00:14:09.240 "base_bdevs_list": [ 00:14:09.240 { 00:14:09.240 "name": null, 00:14:09.240 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:09.240 "is_configured": false, 00:14:09.240 "data_offset": 2048, 00:14:09.240 "data_size": 63488 00:14:09.240 }, 00:14:09.240 { 00:14:09.240 "name": "BaseBdev2", 00:14:09.240 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:09.240 "is_configured": true, 00:14:09.240 "data_offset": 2048, 00:14:09.240 "data_size": 63488 00:14:09.240 }, 00:14:09.240 { 00:14:09.240 "name": "BaseBdev3", 00:14:09.240 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:09.240 "is_configured": true, 00:14:09.240 "data_offset": 2048, 00:14:09.240 "data_size": 63488 00:14:09.240 } 00:14:09.240 ] 00:14:09.240 }' 00:14:09.240 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.240 11:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.173 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.173 11:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.173 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:10.173 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:10.173 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.738 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9372656f-33d2-40eb-a913-987d61bdc788 00:14:10.995 [2024-07-25 11:25:26.677564] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 
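[Editor's note] By this stage the test has torn BaseBdev1 out of the still-configuring array and is rebuilding that slot: it reads back the UUID the array still remembers for the vacated first position (9372656f-...), then creates a brand-new 32 MB malloc disk named NewBaseBdev carrying that same UUID. The "bdev NewBaseBdev is claimed" debug line above shows the raid module picking it up immediately, evidently matching on the remembered UUID rather than on the bdev name. A minimal reproduction of that sequence, assuming the same RPC socket the test uses, would look like:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Recover the UUID the array still records for the empty first slot.
    slot_uuid=$(rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").base_bdevs_list[0].uuid')

    # Bring up a replacement malloc disk (32 MB, 512-byte blocks) carrying that UUID.
    rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$slot_uuid"

    # After examine completes, the slot should read as configured again.
    rpc bdev_wait_for_examine
    rpc bdev_raid_get_bdevs all \
        | jq '.[] | select(.name == "Existed_Raid").base_bdevs_list[0].is_configured'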
00:14:10.995 [2024-07-25 11:25:26.677943] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.995 [2024-07-25 11:25:26.677967] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.995 [2024-07-25 11:25:26.678274] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.995 [2024-07-25 11:25:26.678488] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.995 [2024-07-25 11:25:26.678505] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:10.995 [2024-07-25 11:25:26.678689] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.995 NewBaseBdev 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.995 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.253 11:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:11.511 [ 00:14:11.511 { 00:14:11.511 "name": "NewBaseBdev", 00:14:11.511 "aliases": [ 00:14:11.511 "9372656f-33d2-40eb-a913-987d61bdc788" 00:14:11.511 ], 00:14:11.511 "product_name": "Malloc disk", 00:14:11.511 "block_size": 512, 00:14:11.511 "num_blocks": 65536, 00:14:11.511 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:11.511 "assigned_rate_limits": { 00:14:11.511 "rw_ios_per_sec": 0, 00:14:11.511 "rw_mbytes_per_sec": 0, 00:14:11.511 "r_mbytes_per_sec": 0, 00:14:11.511 "w_mbytes_per_sec": 0 00:14:11.511 }, 00:14:11.511 "claimed": true, 00:14:11.511 "claim_type": "exclusive_write", 00:14:11.511 "zoned": false, 00:14:11.511 "supported_io_types": { 00:14:11.511 "read": true, 00:14:11.511 "write": true, 00:14:11.511 "unmap": true, 00:14:11.511 "flush": true, 00:14:11.511 "reset": true, 00:14:11.511 "nvme_admin": false, 00:14:11.511 "nvme_io": false, 00:14:11.511 "nvme_io_md": false, 00:14:11.511 "write_zeroes": true, 00:14:11.511 "zcopy": true, 00:14:11.511 "get_zone_info": false, 00:14:11.511 "zone_management": false, 00:14:11.511 "zone_append": false, 00:14:11.511 "compare": false, 00:14:11.511 "compare_and_write": false, 00:14:11.511 "abort": true, 00:14:11.511 "seek_hole": false, 00:14:11.511 "seek_data": false, 00:14:11.511 "copy": true, 00:14:11.511 "nvme_iov_md": false 00:14:11.511 }, 00:14:11.511 "memory_domains": [ 00:14:11.511 { 00:14:11.511 "dma_device_id": "system", 00:14:11.511 "dma_device_type": 1 00:14:11.511 }, 00:14:11.511 { 00:14:11.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.511 "dma_device_type": 2 00:14:11.511 } 00:14:11.511 ], 00:14:11.511 "driver_specific": {} 00:14:11.511 } 00:14:11.511 ] 
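The JSON dump above comes from the waitforbdev helper, which the test uses to block until NewBaseBdev is fully registered before asserting on the raid state. Condensed to the two RPCs visible in the trace (the real helper in common/autotest_common.sh adds its own bookkeeping around them):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Block until every registered examine callback has finished.
    $RPC bdev_wait_for_examine
    # Look the bdev up by name; -t 2000 waits up to 2000 ms for it to appear
    # instead of failing immediately, so a non-zero exit means it never showed up.
    $RPC bdev_get_bdevs -b NewBaseBdev -t 2000 > /dev/null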
00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.511 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.769 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.769 "name": "Existed_Raid", 00:14:11.769 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:11.769 "strip_size_kb": 0, 00:14:11.769 "state": "online", 00:14:11.769 "raid_level": "raid1", 00:14:11.769 "superblock": true, 00:14:11.769 "num_base_bdevs": 3, 00:14:11.769 "num_base_bdevs_discovered": 3, 00:14:11.769 "num_base_bdevs_operational": 3, 00:14:11.769 "base_bdevs_list": [ 00:14:11.769 { 00:14:11.769 "name": "NewBaseBdev", 00:14:11.769 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:11.769 "is_configured": true, 00:14:11.769 "data_offset": 2048, 00:14:11.769 "data_size": 63488 00:14:11.769 }, 00:14:11.769 { 00:14:11.769 "name": "BaseBdev2", 00:14:11.769 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:11.769 "is_configured": true, 00:14:11.769 "data_offset": 2048, 00:14:11.769 "data_size": 63488 00:14:11.769 }, 00:14:11.769 { 00:14:11.769 "name": "BaseBdev3", 00:14:11.769 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:11.769 "is_configured": true, 00:14:11.769 "data_offset": 2048, 00:14:11.769 "data_size": 63488 00:14:11.769 } 00:14:11.769 ] 00:14:11.769 }' 00:14:11.769 11:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.769 11:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:12.334 11:25:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:12.334 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:12.592 [2024-07-25 11:25:28.366551] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.592 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:12.592 "name": "Existed_Raid", 00:14:12.592 "aliases": [ 00:14:12.592 "27eb72e0-9917-4f4c-a522-f6796c2fa577" 00:14:12.592 ], 00:14:12.592 "product_name": "Raid Volume", 00:14:12.592 "block_size": 512, 00:14:12.592 "num_blocks": 63488, 00:14:12.592 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:12.592 "assigned_rate_limits": { 00:14:12.592 "rw_ios_per_sec": 0, 00:14:12.592 "rw_mbytes_per_sec": 0, 00:14:12.592 "r_mbytes_per_sec": 0, 00:14:12.592 "w_mbytes_per_sec": 0 00:14:12.592 }, 00:14:12.592 "claimed": false, 00:14:12.592 "zoned": false, 00:14:12.592 "supported_io_types": { 00:14:12.592 "read": true, 00:14:12.592 "write": true, 00:14:12.592 "unmap": false, 00:14:12.592 "flush": false, 00:14:12.592 "reset": true, 00:14:12.592 "nvme_admin": false, 00:14:12.592 "nvme_io": false, 00:14:12.592 "nvme_io_md": false, 00:14:12.592 "write_zeroes": true, 00:14:12.592 "zcopy": false, 00:14:12.592 "get_zone_info": false, 00:14:12.592 "zone_management": false, 00:14:12.592 "zone_append": false, 00:14:12.592 "compare": false, 00:14:12.592 "compare_and_write": false, 00:14:12.592 "abort": false, 00:14:12.592 "seek_hole": false, 00:14:12.592 "seek_data": false, 00:14:12.592 "copy": false, 00:14:12.592 "nvme_iov_md": false 00:14:12.592 }, 00:14:12.592 "memory_domains": [ 00:14:12.592 { 00:14:12.592 "dma_device_id": "system", 00:14:12.592 "dma_device_type": 1 00:14:12.592 }, 00:14:12.592 { 00:14:12.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.592 "dma_device_type": 2 00:14:12.592 }, 00:14:12.592 { 00:14:12.592 "dma_device_id": "system", 00:14:12.592 "dma_device_type": 1 00:14:12.592 }, 00:14:12.592 { 00:14:12.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.592 "dma_device_type": 2 00:14:12.592 }, 00:14:12.592 { 00:14:12.592 "dma_device_id": "system", 00:14:12.592 "dma_device_type": 1 00:14:12.592 }, 00:14:12.592 { 00:14:12.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.592 "dma_device_type": 2 00:14:12.592 } 00:14:12.592 ], 00:14:12.592 "driver_specific": { 00:14:12.592 "raid": { 00:14:12.592 "uuid": "27eb72e0-9917-4f4c-a522-f6796c2fa577", 00:14:12.592 "strip_size_kb": 0, 00:14:12.592 "state": "online", 00:14:12.592 "raid_level": "raid1", 00:14:12.592 "superblock": true, 00:14:12.592 "num_base_bdevs": 3, 00:14:12.592 "num_base_bdevs_discovered": 3, 00:14:12.592 "num_base_bdevs_operational": 3, 00:14:12.592 "base_bdevs_list": [ 00:14:12.592 { 00:14:12.592 "name": "NewBaseBdev", 00:14:12.592 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:12.592 "is_configured": true, 00:14:12.592 "data_offset": 2048, 00:14:12.592 "data_size": 63488 00:14:12.593 }, 00:14:12.593 { 00:14:12.593 "name": "BaseBdev2", 00:14:12.593 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:12.593 "is_configured": true, 00:14:12.593 "data_offset": 2048, 00:14:12.593 "data_size": 63488 
00:14:12.593 }, 00:14:12.593 { 00:14:12.593 "name": "BaseBdev3", 00:14:12.593 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:12.593 "is_configured": true, 00:14:12.593 "data_offset": 2048, 00:14:12.593 "data_size": 63488 00:14:12.593 } 00:14:12.593 ] 00:14:12.593 } 00:14:12.593 } 00:14:12.593 }' 00:14:12.593 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.593 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:12.593 BaseBdev2 00:14:12.593 BaseBdev3' 00:14:12.593 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:12.593 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:12.593 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:12.850 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:12.850 "name": "NewBaseBdev", 00:14:12.850 "aliases": [ 00:14:12.850 "9372656f-33d2-40eb-a913-987d61bdc788" 00:14:12.850 ], 00:14:12.851 "product_name": "Malloc disk", 00:14:12.851 "block_size": 512, 00:14:12.851 "num_blocks": 65536, 00:14:12.851 "uuid": "9372656f-33d2-40eb-a913-987d61bdc788", 00:14:12.851 "assigned_rate_limits": { 00:14:12.851 "rw_ios_per_sec": 0, 00:14:12.851 "rw_mbytes_per_sec": 0, 00:14:12.851 "r_mbytes_per_sec": 0, 00:14:12.851 "w_mbytes_per_sec": 0 00:14:12.851 }, 00:14:12.851 "claimed": true, 00:14:12.851 "claim_type": "exclusive_write", 00:14:12.851 "zoned": false, 00:14:12.851 "supported_io_types": { 00:14:12.851 "read": true, 00:14:12.851 "write": true, 00:14:12.851 "unmap": true, 00:14:12.851 "flush": true, 00:14:12.851 "reset": true, 00:14:12.851 "nvme_admin": false, 00:14:12.851 "nvme_io": false, 00:14:12.851 "nvme_io_md": false, 00:14:12.851 "write_zeroes": true, 00:14:12.851 "zcopy": true, 00:14:12.851 "get_zone_info": false, 00:14:12.851 "zone_management": false, 00:14:12.851 "zone_append": false, 00:14:12.851 "compare": false, 00:14:12.851 "compare_and_write": false, 00:14:12.851 "abort": true, 00:14:12.851 "seek_hole": false, 00:14:12.851 "seek_data": false, 00:14:12.851 "copy": true, 00:14:12.851 "nvme_iov_md": false 00:14:12.851 }, 00:14:12.851 "memory_domains": [ 00:14:12.851 { 00:14:12.851 "dma_device_id": "system", 00:14:12.851 "dma_device_type": 1 00:14:12.851 }, 00:14:12.851 { 00:14:12.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.851 "dma_device_type": 2 00:14:12.851 } 00:14:12.851 ], 00:14:12.851 "driver_specific": {} 00:14:12.851 }' 00:14:12.851 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:13.118 11:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.118 11:25:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:13.376 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:13.633 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:13.633 "name": "BaseBdev2", 00:14:13.633 "aliases": [ 00:14:13.633 "37dc9f31-41fe-4f44-83ef-5a157780b1f9" 00:14:13.633 ], 00:14:13.633 "product_name": "Malloc disk", 00:14:13.633 "block_size": 512, 00:14:13.633 "num_blocks": 65536, 00:14:13.633 "uuid": "37dc9f31-41fe-4f44-83ef-5a157780b1f9", 00:14:13.633 "assigned_rate_limits": { 00:14:13.633 "rw_ios_per_sec": 0, 00:14:13.633 "rw_mbytes_per_sec": 0, 00:14:13.633 "r_mbytes_per_sec": 0, 00:14:13.633 "w_mbytes_per_sec": 0 00:14:13.633 }, 00:14:13.633 "claimed": true, 00:14:13.633 "claim_type": "exclusive_write", 00:14:13.633 "zoned": false, 00:14:13.634 "supported_io_types": { 00:14:13.634 "read": true, 00:14:13.634 "write": true, 00:14:13.634 "unmap": true, 00:14:13.634 "flush": true, 00:14:13.634 "reset": true, 00:14:13.634 "nvme_admin": false, 00:14:13.634 "nvme_io": false, 00:14:13.634 "nvme_io_md": false, 00:14:13.634 "write_zeroes": true, 00:14:13.634 "zcopy": true, 00:14:13.634 "get_zone_info": false, 00:14:13.634 "zone_management": false, 00:14:13.634 "zone_append": false, 00:14:13.634 "compare": false, 00:14:13.634 "compare_and_write": false, 00:14:13.634 "abort": true, 00:14:13.634 "seek_hole": false, 00:14:13.634 "seek_data": false, 00:14:13.634 "copy": true, 00:14:13.634 "nvme_iov_md": false 00:14:13.634 }, 00:14:13.634 "memory_domains": [ 00:14:13.634 { 00:14:13.634 "dma_device_id": "system", 00:14:13.634 "dma_device_type": 1 00:14:13.634 }, 00:14:13.634 { 00:14:13.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.634 "dma_device_type": 2 00:14:13.634 } 00:14:13.634 ], 00:14:13.634 "driver_specific": {} 00:14:13.634 }' 00:14:13.634 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.634 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.634 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:13.634 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.634 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:13.891 11:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.455 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.455 "name": "BaseBdev3", 00:14:14.455 "aliases": [ 00:14:14.455 "75e9a52e-2d0f-495e-8bbb-998858e87d52" 00:14:14.455 ], 00:14:14.455 "product_name": "Malloc disk", 00:14:14.455 "block_size": 512, 00:14:14.455 "num_blocks": 65536, 00:14:14.455 "uuid": "75e9a52e-2d0f-495e-8bbb-998858e87d52", 00:14:14.455 "assigned_rate_limits": { 00:14:14.455 "rw_ios_per_sec": 0, 00:14:14.455 "rw_mbytes_per_sec": 0, 00:14:14.455 "r_mbytes_per_sec": 0, 00:14:14.455 "w_mbytes_per_sec": 0 00:14:14.455 }, 00:14:14.455 "claimed": true, 00:14:14.455 "claim_type": "exclusive_write", 00:14:14.455 "zoned": false, 00:14:14.455 "supported_io_types": { 00:14:14.455 "read": true, 00:14:14.455 "write": true, 00:14:14.455 "unmap": true, 00:14:14.455 "flush": true, 00:14:14.455 "reset": true, 00:14:14.455 "nvme_admin": false, 00:14:14.455 "nvme_io": false, 00:14:14.455 "nvme_io_md": false, 00:14:14.455 "write_zeroes": true, 00:14:14.455 "zcopy": true, 00:14:14.455 "get_zone_info": false, 00:14:14.455 "zone_management": false, 00:14:14.455 "zone_append": false, 00:14:14.455 "compare": false, 00:14:14.455 "compare_and_write": false, 00:14:14.455 "abort": true, 00:14:14.455 "seek_hole": false, 00:14:14.455 "seek_data": false, 00:14:14.455 "copy": true, 00:14:14.455 "nvme_iov_md": false 00:14:14.455 }, 00:14:14.455 "memory_domains": [ 00:14:14.455 { 00:14:14.455 "dma_device_id": "system", 00:14:14.455 "dma_device_type": 1 00:14:14.455 }, 00:14:14.455 { 00:14:14.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.455 "dma_device_type": 2 00:14:14.455 } 00:14:14.455 ], 00:14:14.456 "driver_specific": {} 00:14:14.456 }' 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.456 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.713 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.713 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
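The run of jq probes above is verify_raid_bdev_properties for Existed_Raid: each configured base bdev (NewBaseBdev, BaseBdev2, BaseBdev3) is fetched individually and its block_size, md_size, md_interleave and dif_type are compared against the raid volume's own values. A condensed sketch of what that loop effectively checks, using the same jq filters that appear in the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
    # Names of the configured base bdevs, taken from the raid volume's own JSON.
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
    for name in $names; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        # Each base bdev must agree with the raid volume on these properties.
        [[ $(jq .block_size <<< "$info") == $(jq .block_size <<< "$raid_info") ]]
        [[ $(jq .md_size <<< "$info") == $(jq .md_size <<< "$raid_info") ]]
        [[ $(jq .dif_type <<< "$info") == $(jq .dif_type <<< "$raid_info") ]]
    done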
00:14:14.713 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.713 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.713 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:14.971 [2024-07-25 11:25:30.702803] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:14.971 [2024-07-25 11:25:30.702854] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.971 [2024-07-25 11:25:30.702958] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.971 [2024-07-25 11:25:30.703319] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.971 [2024-07-25 11:25:30.703342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 73879 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73879 ']' 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73879 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73879 00:14:14.971 killing process with pid 73879 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73879' 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73879 00:14:14.971 [2024-07-25 11:25:30.750350] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.971 11:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73879 00:14:15.229 [2024-07-25 11:25:31.012384] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.603 11:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:16.603 00:14:16.603 real 0m32.668s 00:14:16.603 user 0m59.813s 00:14:16.603 sys 0m4.264s 00:14:16.603 11:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.603 ************************************ 00:14:16.603 END TEST raid_state_function_test_sb 00:14:16.603 ************************************ 00:14:16.603 11:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.603 11:25:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:16.603 11:25:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:16.603 11:25:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.603 11:25:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
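raid_state_function_test_sb finishes with the usual teardown before the next test starts: the raid volume is deleted, then the bdev_svc app behind the RPC socket is stopped. In outline, with this run's pid (killprocess also performs the uname/ps ownership checks visible in the trace before sending the signal):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Delete the raid volume first so its base bdevs are released cleanly.
    $RPC bdev_raid_delete Existed_Raid
    # killprocess 73879: confirm the pid is alive, terminate it, and reap it.
    kill -0 73879
    kill 73879
    wait 73879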
00:14:16.603 ************************************ 00:14:16.603 START TEST raid_superblock_test 00:14:16.603 ************************************ 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:14:16.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=74850 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 74850 /var/tmp/spdk-raid.sock 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74850 ']' 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.603 11:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.603 [2024-07-25 11:25:32.365506] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
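raid_superblock_test brings up its own bdev_svc instance on a private RPC socket with bdev_raid debug logging, and only issues RPCs once waitforlisten confirms the socket is accepting connections. Condensed from the trace (backgrounding with '&' is assumed; the log only records the command and the resulting pid 74850):

    # Stub bdev application used as the RPC target for this test.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Wait for the process to start up and listen on the UNIX domain socket
    # before the first RPC is sent.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock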
00:14:16.603 [2024-07-25 11:25:32.366384] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74850 ] 00:14:16.863 [2024-07-25 11:25:32.546414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.124 [2024-07-25 11:25:32.783664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.124 [2024-07-25 11:25:32.987638] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.124 [2024-07-25 11:25:32.987710] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:17.690 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:17.691 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:17.691 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:17.691 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:17.691 malloc1 00:14:17.691 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:18.014 [2024-07-25 11:25:33.764614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:18.014 [2024-07-25 11:25:33.764771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.014 [2024-07-25 11:25:33.764801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:18.014 [2024-07-25 11:25:33.764818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.014 [2024-07-25 11:25:33.767667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.014 [2024-07-25 11:25:33.767714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:18.014 pt1 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.014 11:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:18.271 malloc2 00:14:18.271 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:18.529 [2024-07-25 11:25:34.296684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:18.529 [2024-07-25 11:25:34.296781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.529 [2024-07-25 11:25:34.296812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:18.529 [2024-07-25 11:25:34.296834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.529 [2024-07-25 11:25:34.299534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.529 [2024-07-25 11:25:34.299585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:18.529 pt2 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.529 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:18.786 malloc3 00:14:18.786 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:19.042 [2024-07-25 11:25:34.847920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:19.042 [2024-07-25 11:25:34.848041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.042 [2024-07-25 11:25:34.848072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.042 [2024-07-25 11:25:34.848089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.042 [2024-07-25 11:25:34.851081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.042 [2024-07-25 
11:25:34.851145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:19.042 pt3 00:14:19.042 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:19.042 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:19.042 11:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:19.300 [2024-07-25 11:25:35.088142] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:19.300 [2024-07-25 11:25:35.090594] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.300 [2024-07-25 11:25:35.090730] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:19.300 [2024-07-25 11:25:35.090975] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.300 [2024-07-25 11:25:35.090994] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.300 [2024-07-25 11:25:35.091396] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:19.300 [2024-07-25 11:25:35.091632] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.300 [2024-07-25 11:25:35.091671] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.300 [2024-07-25 11:25:35.091884] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.300 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.559 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.559 "name": "raid_bdev1", 00:14:19.559 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:19.559 "strip_size_kb": 0, 00:14:19.559 "state": "online", 00:14:19.559 "raid_level": "raid1", 00:14:19.559 "superblock": true, 00:14:19.559 "num_base_bdevs": 3, 00:14:19.559 "num_base_bdevs_discovered": 3, 00:14:19.559 "num_base_bdevs_operational": 3, 00:14:19.559 
"base_bdevs_list": [ 00:14:19.559 { 00:14:19.559 "name": "pt1", 00:14:19.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.559 "is_configured": true, 00:14:19.559 "data_offset": 2048, 00:14:19.559 "data_size": 63488 00:14:19.559 }, 00:14:19.559 { 00:14:19.559 "name": "pt2", 00:14:19.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.559 "is_configured": true, 00:14:19.559 "data_offset": 2048, 00:14:19.559 "data_size": 63488 00:14:19.559 }, 00:14:19.559 { 00:14:19.559 "name": "pt3", 00:14:19.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.559 "is_configured": true, 00:14:19.559 "data_offset": 2048, 00:14:19.559 "data_size": 63488 00:14:19.559 } 00:14:19.559 ] 00:14:19.559 }' 00:14:19.559 11:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.559 11:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:20.493 [2024-07-25 11:25:36.308912] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.493 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:20.493 "name": "raid_bdev1", 00:14:20.493 "aliases": [ 00:14:20.493 "5921d88e-bb6a-4582-adcc-e75070130970" 00:14:20.493 ], 00:14:20.493 "product_name": "Raid Volume", 00:14:20.493 "block_size": 512, 00:14:20.493 "num_blocks": 63488, 00:14:20.493 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:20.493 "assigned_rate_limits": { 00:14:20.493 "rw_ios_per_sec": 0, 00:14:20.493 "rw_mbytes_per_sec": 0, 00:14:20.493 "r_mbytes_per_sec": 0, 00:14:20.493 "w_mbytes_per_sec": 0 00:14:20.493 }, 00:14:20.493 "claimed": false, 00:14:20.493 "zoned": false, 00:14:20.493 "supported_io_types": { 00:14:20.493 "read": true, 00:14:20.493 "write": true, 00:14:20.493 "unmap": false, 00:14:20.493 "flush": false, 00:14:20.493 "reset": true, 00:14:20.493 "nvme_admin": false, 00:14:20.493 "nvme_io": false, 00:14:20.493 "nvme_io_md": false, 00:14:20.493 "write_zeroes": true, 00:14:20.493 "zcopy": false, 00:14:20.493 "get_zone_info": false, 00:14:20.493 "zone_management": false, 00:14:20.493 "zone_append": false, 00:14:20.493 "compare": false, 00:14:20.493 "compare_and_write": false, 00:14:20.493 "abort": false, 00:14:20.493 "seek_hole": false, 00:14:20.493 "seek_data": false, 00:14:20.493 "copy": false, 00:14:20.493 "nvme_iov_md": false 00:14:20.493 }, 00:14:20.493 "memory_domains": [ 00:14:20.493 { 00:14:20.493 "dma_device_id": "system", 00:14:20.493 "dma_device_type": 1 00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.493 "dma_device_type": 2 
00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "dma_device_id": "system", 00:14:20.493 "dma_device_type": 1 00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.493 "dma_device_type": 2 00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "dma_device_id": "system", 00:14:20.493 "dma_device_type": 1 00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.493 "dma_device_type": 2 00:14:20.493 } 00:14:20.493 ], 00:14:20.493 "driver_specific": { 00:14:20.493 "raid": { 00:14:20.493 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:20.493 "strip_size_kb": 0, 00:14:20.493 "state": "online", 00:14:20.493 "raid_level": "raid1", 00:14:20.493 "superblock": true, 00:14:20.493 "num_base_bdevs": 3, 00:14:20.493 "num_base_bdevs_discovered": 3, 00:14:20.493 "num_base_bdevs_operational": 3, 00:14:20.493 "base_bdevs_list": [ 00:14:20.493 { 00:14:20.493 "name": "pt1", 00:14:20.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.493 "is_configured": true, 00:14:20.493 "data_offset": 2048, 00:14:20.493 "data_size": 63488 00:14:20.493 }, 00:14:20.493 { 00:14:20.493 "name": "pt2", 00:14:20.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.493 "is_configured": true, 00:14:20.493 "data_offset": 2048, 00:14:20.493 "data_size": 63488 00:14:20.494 }, 00:14:20.494 { 00:14:20.494 "name": "pt3", 00:14:20.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.494 "is_configured": true, 00:14:20.494 "data_offset": 2048, 00:14:20.494 "data_size": 63488 00:14:20.494 } 00:14:20.494 ] 00:14:20.494 } 00:14:20.494 } 00:14:20.494 }' 00:14:20.494 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.752 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:20.752 pt2 00:14:20.752 pt3' 00:14:20.752 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:20.752 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:20.752 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.012 "name": "pt1", 00:14:21.012 "aliases": [ 00:14:21.012 "00000000-0000-0000-0000-000000000001" 00:14:21.012 ], 00:14:21.012 "product_name": "passthru", 00:14:21.012 "block_size": 512, 00:14:21.012 "num_blocks": 65536, 00:14:21.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.012 "assigned_rate_limits": { 00:14:21.012 "rw_ios_per_sec": 0, 00:14:21.012 "rw_mbytes_per_sec": 0, 00:14:21.012 "r_mbytes_per_sec": 0, 00:14:21.012 "w_mbytes_per_sec": 0 00:14:21.012 }, 00:14:21.012 "claimed": true, 00:14:21.012 "claim_type": "exclusive_write", 00:14:21.012 "zoned": false, 00:14:21.012 "supported_io_types": { 00:14:21.012 "read": true, 00:14:21.012 "write": true, 00:14:21.012 "unmap": true, 00:14:21.012 "flush": true, 00:14:21.012 "reset": true, 00:14:21.012 "nvme_admin": false, 00:14:21.012 "nvme_io": false, 00:14:21.012 "nvme_io_md": false, 00:14:21.012 "write_zeroes": true, 00:14:21.012 "zcopy": true, 00:14:21.012 "get_zone_info": false, 00:14:21.012 "zone_management": false, 00:14:21.012 "zone_append": false, 00:14:21.012 "compare": false, 00:14:21.012 "compare_and_write": false, 00:14:21.012 "abort": true, 
00:14:21.012 "seek_hole": false, 00:14:21.012 "seek_data": false, 00:14:21.012 "copy": true, 00:14:21.012 "nvme_iov_md": false 00:14:21.012 }, 00:14:21.012 "memory_domains": [ 00:14:21.012 { 00:14:21.012 "dma_device_id": "system", 00:14:21.012 "dma_device_type": 1 00:14:21.012 }, 00:14:21.012 { 00:14:21.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.012 "dma_device_type": 2 00:14:21.012 } 00:14:21.012 ], 00:14:21.012 "driver_specific": { 00:14:21.012 "passthru": { 00:14:21.012 "name": "pt1", 00:14:21.012 "base_bdev_name": "malloc1" 00:14:21.012 } 00:14:21.012 } 00:14:21.012 }' 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.012 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.271 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.271 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.271 11:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.271 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.271 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.271 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.271 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:21.530 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.530 "name": "pt2", 00:14:21.530 "aliases": [ 00:14:21.530 "00000000-0000-0000-0000-000000000002" 00:14:21.530 ], 00:14:21.530 "product_name": "passthru", 00:14:21.530 "block_size": 512, 00:14:21.530 "num_blocks": 65536, 00:14:21.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.530 "assigned_rate_limits": { 00:14:21.530 "rw_ios_per_sec": 0, 00:14:21.530 "rw_mbytes_per_sec": 0, 00:14:21.530 "r_mbytes_per_sec": 0, 00:14:21.530 "w_mbytes_per_sec": 0 00:14:21.530 }, 00:14:21.530 "claimed": true, 00:14:21.530 "claim_type": "exclusive_write", 00:14:21.530 "zoned": false, 00:14:21.530 "supported_io_types": { 00:14:21.530 "read": true, 00:14:21.530 "write": true, 00:14:21.530 "unmap": true, 00:14:21.530 "flush": true, 00:14:21.530 "reset": true, 00:14:21.530 "nvme_admin": false, 00:14:21.530 "nvme_io": false, 00:14:21.530 "nvme_io_md": false, 00:14:21.530 "write_zeroes": true, 00:14:21.530 "zcopy": true, 00:14:21.530 "get_zone_info": false, 00:14:21.530 "zone_management": false, 00:14:21.530 "zone_append": false, 00:14:21.530 "compare": false, 00:14:21.530 "compare_and_write": false, 00:14:21.530 "abort": true, 00:14:21.530 "seek_hole": false, 00:14:21.530 "seek_data": false, 00:14:21.530 "copy": true, 00:14:21.530 "nvme_iov_md": false 00:14:21.530 }, 
00:14:21.530 "memory_domains": [ 00:14:21.530 { 00:14:21.530 "dma_device_id": "system", 00:14:21.530 "dma_device_type": 1 00:14:21.530 }, 00:14:21.530 { 00:14:21.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.530 "dma_device_type": 2 00:14:21.530 } 00:14:21.530 ], 00:14:21.530 "driver_specific": { 00:14:21.530 "passthru": { 00:14:21.530 "name": "pt2", 00:14:21.530 "base_bdev_name": "malloc2" 00:14:21.530 } 00:14:21.530 } 00:14:21.530 }' 00:14:21.530 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.530 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.530 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.530 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.789 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:22.356 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.356 "name": "pt3", 00:14:22.356 "aliases": [ 00:14:22.356 "00000000-0000-0000-0000-000000000003" 00:14:22.356 ], 00:14:22.356 "product_name": "passthru", 00:14:22.356 "block_size": 512, 00:14:22.356 "num_blocks": 65536, 00:14:22.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.356 "assigned_rate_limits": { 00:14:22.356 "rw_ios_per_sec": 0, 00:14:22.356 "rw_mbytes_per_sec": 0, 00:14:22.356 "r_mbytes_per_sec": 0, 00:14:22.356 "w_mbytes_per_sec": 0 00:14:22.356 }, 00:14:22.356 "claimed": true, 00:14:22.356 "claim_type": "exclusive_write", 00:14:22.356 "zoned": false, 00:14:22.356 "supported_io_types": { 00:14:22.356 "read": true, 00:14:22.356 "write": true, 00:14:22.356 "unmap": true, 00:14:22.356 "flush": true, 00:14:22.356 "reset": true, 00:14:22.356 "nvme_admin": false, 00:14:22.356 "nvme_io": false, 00:14:22.356 "nvme_io_md": false, 00:14:22.356 "write_zeroes": true, 00:14:22.356 "zcopy": true, 00:14:22.356 "get_zone_info": false, 00:14:22.356 "zone_management": false, 00:14:22.356 "zone_append": false, 00:14:22.356 "compare": false, 00:14:22.356 "compare_and_write": false, 00:14:22.356 "abort": true, 00:14:22.356 "seek_hole": false, 00:14:22.356 "seek_data": false, 00:14:22.356 "copy": true, 00:14:22.356 "nvme_iov_md": false 00:14:22.356 }, 00:14:22.356 "memory_domains": [ 00:14:22.356 { 00:14:22.356 "dma_device_id": "system", 00:14:22.356 "dma_device_type": 1 00:14:22.356 }, 00:14:22.356 { 
00:14:22.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.356 "dma_device_type": 2 00:14:22.356 } 00:14:22.356 ], 00:14:22.356 "driver_specific": { 00:14:22.356 "passthru": { 00:14:22.356 "name": "pt3", 00:14:22.356 "base_bdev_name": "malloc3" 00:14:22.356 } 00:14:22.356 } 00:14:22.356 }' 00:14:22.356 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.356 11:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.356 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.357 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.357 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.357 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.357 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.357 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:22.615 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:22.872 [2024-07-25 11:25:38.641597] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.872 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5921d88e-bb6a-4582-adcc-e75070130970 00:14:22.872 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5921d88e-bb6a-4582-adcc-e75070130970 ']' 00:14:22.872 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:23.131 [2024-07-25 11:25:38.869273] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.131 [2024-07-25 11:25:38.869313] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.131 [2024-07-25 11:25:38.869411] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.131 [2024-07-25 11:25:38.869502] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.131 [2024-07-25 11:25:38.869521] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.131 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.131 11:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:23.390 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:23.390 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:23.390 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 
-- # for i in "${base_bdevs_pt[@]}" 00:14:23.390 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:23.649 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.649 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:23.906 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.906 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:24.163 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:24.163 11:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:24.421 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:24.679 [2024-07-25 11:25:40.481668] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:24.679 [2024-07-25 11:25:40.484163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:24.679 [2024-07-25 11:25:40.484244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:24.679 [2024-07-25 11:25:40.484341] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:24.679 [2024-07-25 11:25:40.484443] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:24.679 [2024-07-25 11:25:40.484484] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:24.679 [2024-07-25 11:25:40.484507] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.679 [2024-07-25 11:25:40.484527] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:24.679 request: 00:14:24.679 { 00:14:24.679 "name": "raid_bdev1", 00:14:24.679 "raid_level": "raid1", 00:14:24.679 "base_bdevs": [ 00:14:24.679 "malloc1", 00:14:24.679 "malloc2", 00:14:24.679 "malloc3" 00:14:24.679 ], 00:14:24.679 "superblock": false, 00:14:24.679 "method": "bdev_raid_create", 00:14:24.679 "req_id": 1 00:14:24.679 } 00:14:24.679 Got JSON-RPC error response 00:14:24.679 response: 00:14:24.679 { 00:14:24.679 "code": -17, 00:14:24.679 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:24.679 } 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.679 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:24.937 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:24.937 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:24.937 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:25.195 [2024-07-25 11:25:40.969850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:25.195 [2024-07-25 11:25:40.970004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.195 [2024-07-25 11:25:40.970056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:25.195 [2024-07-25 11:25:40.970088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.195 [2024-07-25 11:25:40.974069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.195 [2024-07-25 11:25:40.974149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:25.195 [2024-07-25 11:25:40.974341] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:25.195 [2024-07-25 11:25:40.974464] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:25.195 pt1 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:25.195 11:25:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.195 11:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.453 11:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.453 "name": "raid_bdev1", 00:14:25.453 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:25.453 "strip_size_kb": 0, 00:14:25.453 "state": "configuring", 00:14:25.453 "raid_level": "raid1", 00:14:25.453 "superblock": true, 00:14:25.453 "num_base_bdevs": 3, 00:14:25.453 "num_base_bdevs_discovered": 1, 00:14:25.453 "num_base_bdevs_operational": 3, 00:14:25.453 "base_bdevs_list": [ 00:14:25.453 { 00:14:25.453 "name": "pt1", 00:14:25.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.453 "is_configured": true, 00:14:25.453 "data_offset": 2048, 00:14:25.453 "data_size": 63488 00:14:25.453 }, 00:14:25.453 { 00:14:25.453 "name": null, 00:14:25.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.453 "is_configured": false, 00:14:25.453 "data_offset": 2048, 00:14:25.453 "data_size": 63488 00:14:25.453 }, 00:14:25.453 { 00:14:25.453 "name": null, 00:14:25.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.453 "is_configured": false, 00:14:25.453 "data_offset": 2048, 00:14:25.453 "data_size": 63488 00:14:25.453 } 00:14:25.453 ] 00:14:25.453 }' 00:14:25.453 11:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.453 11:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.404 11:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:14:26.404 11:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.404 [2024-07-25 11:25:42.222768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.404 [2024-07-25 11:25:42.222858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.404 [2024-07-25 11:25:42.222890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:26.404 [2024-07-25 11:25:42.222909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.404 [2024-07-25 11:25:42.223468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:26.404 [2024-07-25 11:25:42.223501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.404 [2024-07-25 11:25:42.223646] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:26.405 [2024-07-25 11:25:42.223691] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.405 pt2 00:14:26.405 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:26.663 [2024-07-25 11:25:42.498928] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.663 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.922 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.922 "name": "raid_bdev1", 00:14:26.922 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:26.922 "strip_size_kb": 0, 00:14:26.922 "state": "configuring", 00:14:26.922 "raid_level": "raid1", 00:14:26.922 "superblock": true, 00:14:26.922 "num_base_bdevs": 3, 00:14:26.922 "num_base_bdevs_discovered": 1, 00:14:26.922 "num_base_bdevs_operational": 3, 00:14:26.922 "base_bdevs_list": [ 00:14:26.922 { 00:14:26.922 "name": "pt1", 00:14:26.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.922 "is_configured": true, 00:14:26.922 "data_offset": 2048, 00:14:26.922 "data_size": 63488 00:14:26.922 }, 00:14:26.922 { 00:14:26.922 "name": null, 00:14:26.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.922 "is_configured": false, 00:14:26.922 "data_offset": 2048, 00:14:26.922 "data_size": 63488 00:14:26.922 }, 00:14:26.922 { 00:14:26.922 "name": null, 00:14:26.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.922 "is_configured": false, 00:14:26.922 "data_offset": 2048, 00:14:26.922 "data_size": 63488 00:14:26.922 } 00:14:26.922 ] 00:14:26.922 }' 00:14:26.922 11:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.922 11:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.857 [2024-07-25 11:25:43.703308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.857 [2024-07-25 11:25:43.703409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.857 [2024-07-25 11:25:43.703443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:27.857 [2024-07-25 11:25:43.703459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.857 [2024-07-25 11:25:43.704070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.857 [2024-07-25 11:25:43.704105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.857 [2024-07-25 11:25:43.704226] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:27.857 [2024-07-25 11:25:43.704268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.857 pt2 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:27.857 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.116 [2024-07-25 11:25:43.947449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.116 [2024-07-25 11:25:43.947575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.116 [2024-07-25 11:25:43.947618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:28.116 [2024-07-25 11:25:43.947654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.116 [2024-07-25 11:25:43.948234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.116 [2024-07-25 11:25:43.948277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.116 [2024-07-25 11:25:43.948400] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:28.116 [2024-07-25 11:25:43.948447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.116 [2024-07-25 11:25:43.948656] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.116 [2024-07-25 11:25:43.948673] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.116 [2024-07-25 11:25:43.948980] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:28.116 [2024-07-25 11:25:43.949192] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.116 [2024-07-25 11:25:43.949231] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:28.116 [2024-07-25 11:25:43.949390] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
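A point worth calling out from the stretch above: once the passthru bdevs are re-created, raid_bdev1 assembles itself. The examine path spots the on-disk superblock on each new base bdev ("raid superblock found on bdev pt2"/"pt3"), claims it, and brings the array back online without any explicit bdev_raid_create call. A minimal sketch of that re-assembly step, using only RPCs and jq filters that appear in this trace (socket path, names and UUIDs copied from the log; it assumes the SPDK app and the malloc bdevs created earlier in the test are still present):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Re-create the passthru wrappers; each one still carries the RAID
    # superblock written earlier, so examine re-claims it automatically.
    "$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    "$rpc" -s "$sock" bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

    # The array should come back online under its original UUID.
    "$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'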
00:14:28.116 pt3 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.116 11:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.375 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.375 "name": "raid_bdev1", 00:14:28.375 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:28.375 "strip_size_kb": 0, 00:14:28.375 "state": "online", 00:14:28.375 "raid_level": "raid1", 00:14:28.375 "superblock": true, 00:14:28.375 "num_base_bdevs": 3, 00:14:28.375 "num_base_bdevs_discovered": 3, 00:14:28.375 "num_base_bdevs_operational": 3, 00:14:28.375 "base_bdevs_list": [ 00:14:28.375 { 00:14:28.375 "name": "pt1", 00:14:28.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.375 "is_configured": true, 00:14:28.375 "data_offset": 2048, 00:14:28.375 "data_size": 63488 00:14:28.375 }, 00:14:28.375 { 00:14:28.375 "name": "pt2", 00:14:28.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.375 "is_configured": true, 00:14:28.375 "data_offset": 2048, 00:14:28.375 "data_size": 63488 00:14:28.375 }, 00:14:28.375 { 00:14:28.375 "name": "pt3", 00:14:28.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.375 "is_configured": true, 00:14:28.375 "data_offset": 2048, 00:14:28.375 "data_size": 63488 00:14:28.375 } 00:14:28.375 ] 00:14:28.375 }' 00:14:28.375 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.375 11:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.308 11:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:29.308 [2024-07-25 11:25:45.136200] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.308 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:29.308 "name": "raid_bdev1", 00:14:29.308 "aliases": [ 00:14:29.308 "5921d88e-bb6a-4582-adcc-e75070130970" 00:14:29.308 ], 00:14:29.308 "product_name": "Raid Volume", 00:14:29.308 "block_size": 512, 00:14:29.308 "num_blocks": 63488, 00:14:29.308 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:29.308 "assigned_rate_limits": { 00:14:29.308 "rw_ios_per_sec": 0, 00:14:29.308 "rw_mbytes_per_sec": 0, 00:14:29.308 "r_mbytes_per_sec": 0, 00:14:29.308 "w_mbytes_per_sec": 0 00:14:29.308 }, 00:14:29.308 "claimed": false, 00:14:29.308 "zoned": false, 00:14:29.308 "supported_io_types": { 00:14:29.308 "read": true, 00:14:29.308 "write": true, 00:14:29.308 "unmap": false, 00:14:29.308 "flush": false, 00:14:29.308 "reset": true, 00:14:29.308 "nvme_admin": false, 00:14:29.308 "nvme_io": false, 00:14:29.308 "nvme_io_md": false, 00:14:29.308 "write_zeroes": true, 00:14:29.308 "zcopy": false, 00:14:29.308 "get_zone_info": false, 00:14:29.308 "zone_management": false, 00:14:29.308 "zone_append": false, 00:14:29.308 "compare": false, 00:14:29.308 "compare_and_write": false, 00:14:29.308 "abort": false, 00:14:29.308 "seek_hole": false, 00:14:29.308 "seek_data": false, 00:14:29.308 "copy": false, 00:14:29.308 "nvme_iov_md": false 00:14:29.308 }, 00:14:29.308 "memory_domains": [ 00:14:29.308 { 00:14:29.308 "dma_device_id": "system", 00:14:29.308 "dma_device_type": 1 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.308 "dma_device_type": 2 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "dma_device_id": "system", 00:14:29.308 "dma_device_type": 1 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.308 "dma_device_type": 2 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "dma_device_id": "system", 00:14:29.308 "dma_device_type": 1 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.308 "dma_device_type": 2 00:14:29.308 } 00:14:29.308 ], 00:14:29.308 "driver_specific": { 00:14:29.308 "raid": { 00:14:29.308 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:29.308 "strip_size_kb": 0, 00:14:29.308 "state": "online", 00:14:29.308 "raid_level": "raid1", 00:14:29.308 "superblock": true, 00:14:29.308 "num_base_bdevs": 3, 00:14:29.308 "num_base_bdevs_discovered": 3, 00:14:29.308 "num_base_bdevs_operational": 3, 00:14:29.308 "base_bdevs_list": [ 00:14:29.308 { 00:14:29.308 "name": "pt1", 00:14:29.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.308 "is_configured": true, 00:14:29.308 "data_offset": 2048, 00:14:29.308 "data_size": 63488 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "name": "pt2", 00:14:29.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.308 "is_configured": true, 00:14:29.308 "data_offset": 2048, 00:14:29.308 "data_size": 63488 00:14:29.308 }, 00:14:29.308 { 00:14:29.308 "name": "pt3", 00:14:29.308 "uuid": "00000000-0000-0000-0000-000000000003", 
00:14:29.308 "is_configured": true, 00:14:29.308 "data_offset": 2048, 00:14:29.308 "data_size": 63488 00:14:29.308 } 00:14:29.308 ] 00:14:29.308 } 00:14:29.308 } 00:14:29.308 }' 00:14:29.308 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.566 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:29.566 pt2 00:14:29.566 pt3' 00:14:29.566 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:29.566 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:29.566 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:29.823 "name": "pt1", 00:14:29.823 "aliases": [ 00:14:29.823 "00000000-0000-0000-0000-000000000001" 00:14:29.823 ], 00:14:29.823 "product_name": "passthru", 00:14:29.823 "block_size": 512, 00:14:29.823 "num_blocks": 65536, 00:14:29.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.823 "assigned_rate_limits": { 00:14:29.823 "rw_ios_per_sec": 0, 00:14:29.823 "rw_mbytes_per_sec": 0, 00:14:29.823 "r_mbytes_per_sec": 0, 00:14:29.823 "w_mbytes_per_sec": 0 00:14:29.823 }, 00:14:29.823 "claimed": true, 00:14:29.823 "claim_type": "exclusive_write", 00:14:29.823 "zoned": false, 00:14:29.823 "supported_io_types": { 00:14:29.823 "read": true, 00:14:29.823 "write": true, 00:14:29.823 "unmap": true, 00:14:29.823 "flush": true, 00:14:29.823 "reset": true, 00:14:29.823 "nvme_admin": false, 00:14:29.823 "nvme_io": false, 00:14:29.823 "nvme_io_md": false, 00:14:29.823 "write_zeroes": true, 00:14:29.823 "zcopy": true, 00:14:29.823 "get_zone_info": false, 00:14:29.823 "zone_management": false, 00:14:29.823 "zone_append": false, 00:14:29.823 "compare": false, 00:14:29.823 "compare_and_write": false, 00:14:29.823 "abort": true, 00:14:29.823 "seek_hole": false, 00:14:29.823 "seek_data": false, 00:14:29.823 "copy": true, 00:14:29.823 "nvme_iov_md": false 00:14:29.823 }, 00:14:29.823 "memory_domains": [ 00:14:29.823 { 00:14:29.823 "dma_device_id": "system", 00:14:29.823 "dma_device_type": 1 00:14:29.823 }, 00:14:29.823 { 00:14:29.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.823 "dma_device_type": 2 00:14:29.823 } 00:14:29.823 ], 00:14:29.823 "driver_specific": { 00:14:29.823 "passthru": { 00:14:29.823 "name": "pt1", 00:14:29.823 "base_bdev_name": "malloc1" 00:14:29.823 } 00:14:29.823 } 00:14:29.823 }' 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.823 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:30.081 11:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.338 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.338 "name": "pt2", 00:14:30.338 "aliases": [ 00:14:30.338 "00000000-0000-0000-0000-000000000002" 00:14:30.338 ], 00:14:30.338 "product_name": "passthru", 00:14:30.338 "block_size": 512, 00:14:30.338 "num_blocks": 65536, 00:14:30.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.338 "assigned_rate_limits": { 00:14:30.338 "rw_ios_per_sec": 0, 00:14:30.338 "rw_mbytes_per_sec": 0, 00:14:30.338 "r_mbytes_per_sec": 0, 00:14:30.338 "w_mbytes_per_sec": 0 00:14:30.338 }, 00:14:30.338 "claimed": true, 00:14:30.338 "claim_type": "exclusive_write", 00:14:30.338 "zoned": false, 00:14:30.338 "supported_io_types": { 00:14:30.338 "read": true, 00:14:30.338 "write": true, 00:14:30.338 "unmap": true, 00:14:30.338 "flush": true, 00:14:30.338 "reset": true, 00:14:30.338 "nvme_admin": false, 00:14:30.338 "nvme_io": false, 00:14:30.338 "nvme_io_md": false, 00:14:30.338 "write_zeroes": true, 00:14:30.338 "zcopy": true, 00:14:30.338 "get_zone_info": false, 00:14:30.338 "zone_management": false, 00:14:30.338 "zone_append": false, 00:14:30.338 "compare": false, 00:14:30.338 "compare_and_write": false, 00:14:30.338 "abort": true, 00:14:30.338 "seek_hole": false, 00:14:30.338 "seek_data": false, 00:14:30.338 "copy": true, 00:14:30.338 "nvme_iov_md": false 00:14:30.338 }, 00:14:30.338 "memory_domains": [ 00:14:30.338 { 00:14:30.338 "dma_device_id": "system", 00:14:30.338 "dma_device_type": 1 00:14:30.338 }, 00:14:30.338 { 00:14:30.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.338 "dma_device_type": 2 00:14:30.338 } 00:14:30.338 ], 00:14:30.338 "driver_specific": { 00:14:30.338 "passthru": { 00:14:30.338 "name": "pt2", 00:14:30.338 "base_bdev_name": "malloc2" 00:14:30.338 } 00:14:30.338 } 00:14:30.338 }' 00:14:30.338 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.338 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.338 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.338 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.596 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.596 
11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.854 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.854 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.854 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:30.854 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.112 "name": "pt3", 00:14:31.112 "aliases": [ 00:14:31.112 "00000000-0000-0000-0000-000000000003" 00:14:31.112 ], 00:14:31.112 "product_name": "passthru", 00:14:31.112 "block_size": 512, 00:14:31.112 "num_blocks": 65536, 00:14:31.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.112 "assigned_rate_limits": { 00:14:31.112 "rw_ios_per_sec": 0, 00:14:31.112 "rw_mbytes_per_sec": 0, 00:14:31.112 "r_mbytes_per_sec": 0, 00:14:31.112 "w_mbytes_per_sec": 0 00:14:31.112 }, 00:14:31.112 "claimed": true, 00:14:31.112 "claim_type": "exclusive_write", 00:14:31.112 "zoned": false, 00:14:31.112 "supported_io_types": { 00:14:31.112 "read": true, 00:14:31.112 "write": true, 00:14:31.112 "unmap": true, 00:14:31.112 "flush": true, 00:14:31.112 "reset": true, 00:14:31.112 "nvme_admin": false, 00:14:31.112 "nvme_io": false, 00:14:31.112 "nvme_io_md": false, 00:14:31.112 "write_zeroes": true, 00:14:31.112 "zcopy": true, 00:14:31.112 "get_zone_info": false, 00:14:31.112 "zone_management": false, 00:14:31.112 "zone_append": false, 00:14:31.112 "compare": false, 00:14:31.112 "compare_and_write": false, 00:14:31.112 "abort": true, 00:14:31.112 "seek_hole": false, 00:14:31.112 "seek_data": false, 00:14:31.112 "copy": true, 00:14:31.112 "nvme_iov_md": false 00:14:31.112 }, 00:14:31.112 "memory_domains": [ 00:14:31.112 { 00:14:31.112 "dma_device_id": "system", 00:14:31.112 "dma_device_type": 1 00:14:31.112 }, 00:14:31.112 { 00:14:31.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.112 "dma_device_type": 2 00:14:31.112 } 00:14:31.112 ], 00:14:31.112 "driver_specific": { 00:14:31.112 "passthru": { 00:14:31.112 "name": "pt3", 00:14:31.112 "base_bdev_name": "malloc3" 00:14:31.112 } 00:14:31.112 } 00:14:31.112 }' 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.112 11:25:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:31.370 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:31.628 [2024-07-25 11:25:47.424880] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.628 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5921d88e-bb6a-4582-adcc-e75070130970 '!=' 5921d88e-bb6a-4582-adcc-e75070130970 ']' 00:14:31.628 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:14:31.628 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:31.628 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:31.628 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:31.886 [2024-07-25 11:25:47.723059] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.886 11:25:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.454 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.454 "name": "raid_bdev1", 00:14:32.454 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:32.454 "strip_size_kb": 0, 00:14:32.454 "state": "online", 00:14:32.454 "raid_level": "raid1", 00:14:32.454 "superblock": true, 00:14:32.454 "num_base_bdevs": 3, 00:14:32.454 "num_base_bdevs_discovered": 2, 00:14:32.454 "num_base_bdevs_operational": 2, 00:14:32.454 "base_bdevs_list": [ 00:14:32.454 { 00:14:32.454 "name": null, 00:14:32.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.454 "is_configured": false, 00:14:32.454 "data_offset": 2048, 00:14:32.454 "data_size": 63488 00:14:32.454 }, 00:14:32.454 { 00:14:32.454 "name": "pt2", 00:14:32.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.454 "is_configured": true, 00:14:32.454 "data_offset": 2048, 00:14:32.454 "data_size": 63488 00:14:32.454 }, 00:14:32.454 { 
00:14:32.454 "name": "pt3", 00:14:32.454 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.454 "is_configured": true, 00:14:32.454 "data_offset": 2048, 00:14:32.454 "data_size": 63488 00:14:32.454 } 00:14:32.454 ] 00:14:32.454 }' 00:14:32.454 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.454 11:25:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.021 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:33.279 [2024-07-25 11:25:48.959380] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.279 [2024-07-25 11:25:48.959441] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.279 [2024-07-25 11:25:48.959548] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.279 [2024-07-25 11:25:48.959630] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.279 [2024-07-25 11:25:48.959646] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:33.279 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.279 11:25:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:14:33.538 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:14:33.538 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:14:33.538 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:14:33.538 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:14:33.538 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:33.797 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:33.797 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:14:33.797 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:34.081 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:34.081 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:14:34.081 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:14:34.081 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:14:34.081 11:25:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.380 [2024-07-25 11:25:50.131672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.380 [2024-07-25 11:25:50.131762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.380 [2024-07-25 11:25:50.131800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:34.380 [2024-07-25 
11:25:50.131816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.380 [2024-07-25 11:25:50.134592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.380 [2024-07-25 11:25:50.134649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.380 [2024-07-25 11:25:50.134761] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:34.380 [2024-07-25 11:25:50.134831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.380 pt2 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.380 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.639 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:34.639 "name": "raid_bdev1", 00:14:34.639 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:34.639 "strip_size_kb": 0, 00:14:34.639 "state": "configuring", 00:14:34.639 "raid_level": "raid1", 00:14:34.639 "superblock": true, 00:14:34.639 "num_base_bdevs": 3, 00:14:34.639 "num_base_bdevs_discovered": 1, 00:14:34.639 "num_base_bdevs_operational": 2, 00:14:34.639 "base_bdevs_list": [ 00:14:34.639 { 00:14:34.639 "name": null, 00:14:34.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.639 "is_configured": false, 00:14:34.639 "data_offset": 2048, 00:14:34.639 "data_size": 63488 00:14:34.639 }, 00:14:34.639 { 00:14:34.639 "name": "pt2", 00:14:34.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.639 "is_configured": true, 00:14:34.639 "data_offset": 2048, 00:14:34.639 "data_size": 63488 00:14:34.639 }, 00:14:34.639 { 00:14:34.639 "name": null, 00:14:34.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.639 "is_configured": false, 00:14:34.639 "data_offset": 2048, 00:14:34.639 "data_size": 63488 00:14:34.639 } 00:14:34.639 ] 00:14:34.639 }' 00:14:34.639 11:25:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:34.639 11:25:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:35.575 [2024-07-25 11:25:51.396040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:35.575 [2024-07-25 11:25:51.396117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.575 [2024-07-25 11:25:51.396152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:35.575 [2024-07-25 11:25:51.396169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.575 [2024-07-25 11:25:51.396797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.575 [2024-07-25 11:25:51.396823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:35.575 [2024-07-25 11:25:51.396936] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:35.575 [2024-07-25 11:25:51.396976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:35.575 [2024-07-25 11:25:51.397143] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:35.575 [2024-07-25 11:25:51.397193] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:35.575 [2024-07-25 11:25:51.397499] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:35.575 [2024-07-25 11:25:51.397713] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:35.575 [2024-07-25 11:25:51.397745] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:35.575 [2024-07-25 11:25:51.397928] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.575 pt3 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.575 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
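Every verify_raid_bdev_state call in this trace expands the same way: a handful of locals, a single bdev_raid_get_bdevs all RPC, and a jq select on the bdev name, after which individual fields are compared against the expected values. A simplified reconstruction of what the helper in bdev_raid.sh appears to do, inferred only from the locals and the jq filter visible in the trace; the real helper likely checks more (the base_bdevs_list, discovered counts), so treat this as illustrative:

    # Rough sketch of verify_raid_bdev_state(), reconstructed from the trace.
    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5 raid_bdev_info

        raid_bdev_info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
                         jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        [[ $(jq -r .state <<< "$raid_bdev_info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<< "$raid_bdev_info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<< "$raid_bdev_info") -eq $strip_size ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<< "$raid_bdev_info") -eq $num_base_bdevs_operational ]]
    }

Invoked as in the trace, e.g. verify_raid_bdev_state raid_bdev1 online raid1 0 2 for the degraded two-of-three array being checked here.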
00:14:35.833 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.833 "name": "raid_bdev1", 00:14:35.833 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:35.833 "strip_size_kb": 0, 00:14:35.833 "state": "online", 00:14:35.833 "raid_level": "raid1", 00:14:35.833 "superblock": true, 00:14:35.833 "num_base_bdevs": 3, 00:14:35.833 "num_base_bdevs_discovered": 2, 00:14:35.833 "num_base_bdevs_operational": 2, 00:14:35.833 "base_bdevs_list": [ 00:14:35.833 { 00:14:35.833 "name": null, 00:14:35.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.833 "is_configured": false, 00:14:35.833 "data_offset": 2048, 00:14:35.833 "data_size": 63488 00:14:35.833 }, 00:14:35.833 { 00:14:35.833 "name": "pt2", 00:14:35.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.834 "is_configured": true, 00:14:35.834 "data_offset": 2048, 00:14:35.834 "data_size": 63488 00:14:35.834 }, 00:14:35.834 { 00:14:35.834 "name": "pt3", 00:14:35.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.834 "is_configured": true, 00:14:35.834 "data_offset": 2048, 00:14:35.834 "data_size": 63488 00:14:35.834 } 00:14:35.834 ] 00:14:35.834 }' 00:14:35.834 11:25:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.834 11:25:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.770 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:37.029 [2024-07-25 11:25:52.748464] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.029 [2024-07-25 11:25:52.748525] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.029 [2024-07-25 11:25:52.748624] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.029 [2024-07-25 11:25:52.748752] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.029 [2024-07-25 11:25:52.748790] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:37.029 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.029 11:25:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:14:37.287 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:14:37.287 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:14:37.287 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:14:37.287 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:14:37.287 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:37.544 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:37.802 [2024-07-25 11:25:53.560756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:37.802 [2024-07-25 11:25:53.560881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.802 
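At @541 the array is torn down and the script confirms that no raid bdev is left before it starts the next rebuild; that rebuild deliberately brings pt1 back with an out-of-date superblock, and the later @561/@570 checks assert that slot 0 of base_bdevs_list stays unconfigured even once the array is online again. Condensed into the RPCs and jq filters that appear in the trace (illustrative, not the literal script text):

    # Tear down and confirm no raid bdevs remain (@541/@542).
    "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
    [[ -z $("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[]') ]]

    # After the stale pt1 is re-added, slot 0 must stay unconfigured both
    # while the array is configuring and after it comes back online.
    [[ $("$rpc" -s "$sock" bdev_raid_get_bdevs configuring |
         jq -r '.[].base_bdevs_list[0].is_configured') == false ]]
    [[ $("$rpc" -s "$sock" bdev_raid_get_bdevs online |
         jq -r '.[].base_bdevs_list[0].is_configured') == false ]]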
[2024-07-25 11:25:53.560915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:37.802 [2024-07-25 11:25:53.560933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.802 [2024-07-25 11:25:53.563695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.802 [2024-07-25 11:25:53.563747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:37.802 [2024-07-25 11:25:53.563854] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:37.802 [2024-07-25 11:25:53.563919] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:37.802 [2024-07-25 11:25:53.564104] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:37.802 [2024-07-25 11:25:53.564131] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.802 [2024-07-25 11:25:53.564157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:37.802 [2024-07-25 11:25:53.564223] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.802 pt1 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:37.802 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.803 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.073 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.073 "name": "raid_bdev1", 00:14:38.073 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:38.073 "strip_size_kb": 0, 00:14:38.073 "state": "configuring", 00:14:38.073 "raid_level": "raid1", 00:14:38.073 "superblock": true, 00:14:38.073 "num_base_bdevs": 3, 00:14:38.073 "num_base_bdevs_discovered": 1, 00:14:38.073 "num_base_bdevs_operational": 2, 00:14:38.073 "base_bdevs_list": [ 00:14:38.073 { 00:14:38.073 "name": null, 00:14:38.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.073 "is_configured": false, 00:14:38.073 "data_offset": 2048, 00:14:38.073 "data_size": 63488 00:14:38.073 }, 
00:14:38.073 { 00:14:38.073 "name": "pt2", 00:14:38.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.073 "is_configured": true, 00:14:38.073 "data_offset": 2048, 00:14:38.073 "data_size": 63488 00:14:38.073 }, 00:14:38.073 { 00:14:38.073 "name": null, 00:14:38.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.073 "is_configured": false, 00:14:38.073 "data_offset": 2048, 00:14:38.073 "data_size": 63488 00:14:38.073 } 00:14:38.073 ] 00:14:38.073 }' 00:14:38.073 11:25:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.073 11:25:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.019 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:39.019 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:39.019 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:14:39.019 11:25:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.277 [2024-07-25 11:25:55.097209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.277 [2024-07-25 11:25:55.097310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.277 [2024-07-25 11:25:55.097338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:39.277 [2024-07-25 11:25:55.097354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.277 [2024-07-25 11:25:55.097918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.277 [2024-07-25 11:25:55.097973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.277 [2024-07-25 11:25:55.098073] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:39.277 [2024-07-25 11:25:55.098117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.277 [2024-07-25 11:25:55.098272] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:39.277 [2024-07-25 11:25:55.098302] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.277 [2024-07-25 11:25:55.098746] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:39.277 [2024-07-25 11:25:55.098952] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:39.277 [2024-07-25 11:25:55.098975] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:39.277 [2024-07-25 11:25:55.099136] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.277 pt3 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.277 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.536 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.536 "name": "raid_bdev1", 00:14:39.536 "uuid": "5921d88e-bb6a-4582-adcc-e75070130970", 00:14:39.536 "strip_size_kb": 0, 00:14:39.536 "state": "online", 00:14:39.536 "raid_level": "raid1", 00:14:39.536 "superblock": true, 00:14:39.536 "num_base_bdevs": 3, 00:14:39.536 "num_base_bdevs_discovered": 2, 00:14:39.536 "num_base_bdevs_operational": 2, 00:14:39.536 "base_bdevs_list": [ 00:14:39.536 { 00:14:39.536 "name": null, 00:14:39.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.536 "is_configured": false, 00:14:39.536 "data_offset": 2048, 00:14:39.536 "data_size": 63488 00:14:39.536 }, 00:14:39.536 { 00:14:39.536 "name": "pt2", 00:14:39.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.536 "is_configured": true, 00:14:39.536 "data_offset": 2048, 00:14:39.536 "data_size": 63488 00:14:39.536 }, 00:14:39.536 { 00:14:39.536 "name": "pt3", 00:14:39.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.536 "is_configured": true, 00:14:39.536 "data_offset": 2048, 00:14:39.536 "data_size": 63488 00:14:39.536 } 00:14:39.536 ] 00:14:39.536 }' 00:14:39.536 11:25:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.536 11:25:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.469 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:40.469 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:40.469 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:14:40.469 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:40.469 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:14:40.728 [2024-07-25 11:25:56.477960] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 5921d88e-bb6a-4582-adcc-e75070130970 '!=' 5921d88e-bb6a-4582-adcc-e75070130970 ']' 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 74850 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74850 ']' 00:14:40.728 11:25:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74850 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74850 00:14:40.728 killing process with pid 74850 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74850' 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74850 00:14:40.728 [2024-07-25 11:25:56.519661] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.728 [2024-07-25 11:25:56.519780] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.728 11:25:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74850 00:14:40.728 [2024-07-25 11:25:56.519861] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.728 [2024-07-25 11:25:56.519877] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:40.985 [2024-07-25 11:25:56.782785] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.362 11:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:42.362 00:14:42.362 real 0m25.672s 00:14:42.362 user 0m46.836s 00:14:42.362 sys 0m3.327s 00:14:42.362 11:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.362 ************************************ 00:14:42.362 END TEST raid_superblock_test 00:14:42.362 ************************************ 00:14:42.362 11:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.362 11:25:57 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:42.362 11:25:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:42.362 11:25:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.362 11:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:42.362 ************************************ 00:14:42.362 START TEST raid_read_error_test 00:14:42.362 ************************************ 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
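The killprocess calls traced here follow a small, fixed pattern from autotest_common.sh: make sure the pid is still alive, refuse to touch anything that looks like a sudo wrapper, then kill and wait so the RPC socket is released before the next test starts. A simplified sketch of that flow (the real helper also branches on uname, as the trace shows):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                                   # still running?
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never kill a sudo wrapper; SPDK apps show up as reactor_0
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                                   # reap it before the next test starts
  }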
00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:14:42.362 11:25:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.YnCR9UcIti 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=75593 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 75593 /var/tmp/spdk-raid.sock 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75593 ']' 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:42.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.362 11:25:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.362 [2024-07-25 11:25:58.103324] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
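For the read test, bdevperf is started in wait-for-RPC mode (-z -f), so no I/O runs until the raid volume has been assembled over the socket; the workload itself is only kicked off later with bdevperf.py perform_tests, which appears further down in the trace. Roughly, reusing the exact binary, socket and options captured above (the redirection into the mktemp'd log file is an assumption):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  sock=/var/tmp/spdk-raid.sock
  bdevperf_log=$(mktemp -p /raidtest)
  # Start idle: -z waits for RPC, -T limits the run to raid_bdev1, -t 60s of 128k randrw.
  "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  waitforlisten "$raid_pid" "$sock"    # autotest_common.sh helper seen in the trace
  # ... build the base bdevs and raid_bdev1 over $sock, inject the read error ...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests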
00:14:42.362 [2024-07-25 11:25:58.103503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75593 ] 00:14:42.622 [2024-07-25 11:25:58.274348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.881 [2024-07-25 11:25:58.524978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.881 [2024-07-25 11:25:58.729069] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.881 [2024-07-25 11:25:58.729131] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.448 11:25:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.448 11:25:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:43.448 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:43.448 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.448 BaseBdev1_malloc 00:14:43.706 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:43.706 true 00:14:43.706 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:43.965 [2024-07-25 11:25:59.801487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:43.965 [2024-07-25 11:25:59.801592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.965 [2024-07-25 11:25:59.801670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:43.965 [2024-07-25 11:25:59.801699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.965 [2024-07-25 11:25:59.804932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.965 [2024-07-25 11:25:59.804982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.965 BaseBdev1 00:14:43.965 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:43.965 11:25:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.224 BaseBdev2_malloc 00:14:44.482 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:44.482 true 00:14:44.742 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.742 [2024-07-25 11:26:00.587036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.742 [2024-07-25 11:26:00.587354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.742 [2024-07-25 11:26:00.587594] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:44.742 [2024-07-25 11:26:00.587809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.742 [2024-07-25 11:26:00.590914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.742 [2024-07-25 11:26:00.591094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.742 BaseBdev2 00:14:44.742 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:44.742 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:45.001 BaseBdev3_malloc 00:14:45.001 11:26:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:45.569 true 00:14:45.569 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:45.569 [2024-07-25 11:26:01.377422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:45.569 [2024-07-25 11:26:01.377522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.569 [2024-07-25 11:26:01.377579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:45.569 [2024-07-25 11:26:01.377605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.569 [2024-07-25 11:26:01.380669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.570 [2024-07-25 11:26:01.380717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.570 BaseBdev3 00:14:45.570 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:45.828 [2024-07-25 11:26:01.673647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.828 [2024-07-25 11:26:01.676099] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.828 [2024-07-25 11:26:01.676214] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.828 [2024-07-25 11:26:01.676523] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:45.828 [2024-07-25 11:26:01.676549] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.828 [2024-07-25 11:26:01.677001] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:45.828 [2024-07-25 11:26:01.677285] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:45.828 [2024-07-25 11:26:01.677304] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:45.828 [2024-07-25 11:26:01.677583] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.828 11:26:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.828 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.087 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.087 "name": "raid_bdev1", 00:14:46.087 "uuid": "15e9c2ec-19b3-4978-9d0f-c6946706ba9f", 00:14:46.087 "strip_size_kb": 0, 00:14:46.087 "state": "online", 00:14:46.087 "raid_level": "raid1", 00:14:46.087 "superblock": true, 00:14:46.087 "num_base_bdevs": 3, 00:14:46.087 "num_base_bdevs_discovered": 3, 00:14:46.087 "num_base_bdevs_operational": 3, 00:14:46.087 "base_bdevs_list": [ 00:14:46.087 { 00:14:46.087 "name": "BaseBdev1", 00:14:46.087 "uuid": "01af3f08-4df5-5c8c-8ba8-64b2776b7a94", 00:14:46.087 "is_configured": true, 00:14:46.087 "data_offset": 2048, 00:14:46.087 "data_size": 63488 00:14:46.087 }, 00:14:46.087 { 00:14:46.087 "name": "BaseBdev2", 00:14:46.087 "uuid": "f801b8a1-8f62-5388-b8ef-579a7f3465cd", 00:14:46.087 "is_configured": true, 00:14:46.087 "data_offset": 2048, 00:14:46.087 "data_size": 63488 00:14:46.087 }, 00:14:46.087 { 00:14:46.087 "name": "BaseBdev3", 00:14:46.087 "uuid": "464bdd34-a459-5c4e-acbc-012d0081d54f", 00:14:46.087 "is_configured": true, 00:14:46.087 "data_offset": 2048, 00:14:46.087 "data_size": 63488 00:14:46.087 } 00:14:46.087 ] 00:14:46.087 }' 00:14:46.087 11:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.087 11:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.024 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:47.024 11:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:47.024 [2024-07-25 11:26:02.807456] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:47.959 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:48.217 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.218 11:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.477 11:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.477 "name": "raid_bdev1", 00:14:48.477 "uuid": "15e9c2ec-19b3-4978-9d0f-c6946706ba9f", 00:14:48.477 "strip_size_kb": 0, 00:14:48.477 "state": "online", 00:14:48.477 "raid_level": "raid1", 00:14:48.477 "superblock": true, 00:14:48.477 "num_base_bdevs": 3, 00:14:48.477 "num_base_bdevs_discovered": 3, 00:14:48.477 "num_base_bdevs_operational": 3, 00:14:48.477 "base_bdevs_list": [ 00:14:48.477 { 00:14:48.477 "name": "BaseBdev1", 00:14:48.477 "uuid": "01af3f08-4df5-5c8c-8ba8-64b2776b7a94", 00:14:48.477 "is_configured": true, 00:14:48.477 "data_offset": 2048, 00:14:48.477 "data_size": 63488 00:14:48.477 }, 00:14:48.477 { 00:14:48.477 "name": "BaseBdev2", 00:14:48.477 "uuid": "f801b8a1-8f62-5388-b8ef-579a7f3465cd", 00:14:48.477 "is_configured": true, 00:14:48.477 "data_offset": 2048, 00:14:48.477 "data_size": 63488 00:14:48.477 }, 00:14:48.477 { 00:14:48.477 "name": "BaseBdev3", 00:14:48.477 "uuid": "464bdd34-a459-5c4e-acbc-012d0081d54f", 00:14:48.477 "is_configured": true, 00:14:48.477 "data_offset": 2048, 00:14:48.477 "data_size": 63488 00:14:48.477 } 00:14:48.477 ] 00:14:48.477 }' 00:14:48.477 11:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.477 11:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.413 11:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:49.413 [2024-07-25 11:26:05.151265] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.413 [2024-07-25 11:26:05.151576] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.413 [2024-07-25 11:26:05.154785] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.413 [2024-07-25 11:26:05.154845] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.413 [2024-07-25 11:26:05.154970] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.413 [2024-07-25 11:26:05.155004] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:49.413 0 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 75593 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75593 ']' 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75593 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75593 00:14:49.413 killing process with pid 75593 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75593' 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75593 00:14:49.413 11:26:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75593 00:14:49.413 [2024-07-25 11:26:05.188767] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.671 [2024-07-25 11:26:05.403146] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.YnCR9UcIti 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:14:51.046 ************************************ 00:14:51.046 END TEST raid_read_error_test 00:14:51.046 ************************************ 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:51.046 00:14:51.046 real 0m8.665s 00:14:51.046 user 0m13.221s 00:14:51.046 sys 0m1.072s 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.046 11:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.046 11:26:06 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:51.046 11:26:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:51.046 11:26:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.046 11:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:51.046 ************************************ 00:14:51.046 
START TEST raid_write_error_test 00:14:51.046 ************************************ 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.K6j488VyFU 00:14:51.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=75790 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 75790 /var/tmp/spdk-raid.sock 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75790 ']' 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.046 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:51.047 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.047 11:26:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.047 [2024-07-25 11:26:06.808897] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:14:51.047 [2024-07-25 11:26:06.809078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75790 ] 00:14:51.305 [2024-07-25 11:26:06.983866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.565 [2024-07-25 11:26:07.228945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.565 [2024-07-25 11:26:07.439512] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.565 [2024-07-25 11:26:07.439595] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.131 11:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.131 11:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:52.131 11:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:52.131 11:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:52.389 BaseBdev1_malloc 00:14:52.389 11:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:52.648 true 00:14:52.648 11:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:52.907 [2024-07-25 11:26:08.588394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:52.907 [2024-07-25 11:26:08.588519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.907 [2024-07-25 11:26:08.588557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:52.907 [2024-07-25 11:26:08.588573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
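Each base device in these error tests is a three-layer stack: a malloc bdev, an error bdev wrapped around it (which takes the EE_ prefix), and a passthru bdev on top that the raid volume actually claims. The write test's trace has just built BaseBdev1 that way; putting the whole raid1 volume together looks roughly like this, using the RPC calls visible in the log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"           # 32 MiB, 512-byte blocks
    $rpc bdev_error_create "${bdev}_malloc"                      # error bdev appears as EE_${bdev}_malloc
    $rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"  # the name the raid volume claims
  done
  # Superblock-enabled raid1 across the passthru devices, then arm the fault.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure   # the read test injects 'read failure' instead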
00:14:52.907 [2024-07-25 11:26:08.591552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.907 [2024-07-25 11:26:08.591595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.907 BaseBdev1 00:14:52.907 11:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:52.907 11:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:53.166 BaseBdev2_malloc 00:14:53.166 11:26:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:53.425 true 00:14:53.425 11:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:53.684 [2024-07-25 11:26:09.368682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:53.684 [2024-07-25 11:26:09.368778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.684 [2024-07-25 11:26:09.368819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:53.684 [2024-07-25 11:26:09.368836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.684 [2024-07-25 11:26:09.371810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.684 [2024-07-25 11:26:09.371853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:53.684 BaseBdev2 00:14:53.684 11:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:53.684 11:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:53.943 BaseBdev3_malloc 00:14:53.943 11:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:54.235 true 00:14:54.235 11:26:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:54.235 [2024-07-25 11:26:10.073910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:54.235 [2024-07-25 11:26:10.074010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.235 [2024-07-25 11:26:10.074063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:54.235 [2024-07-25 11:26:10.074095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.235 [2024-07-25 11:26:10.077091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.235 [2024-07-25 11:26:10.077133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:54.235 BaseBdev3 00:14:54.235 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
raid_bdev1 -s 00:14:54.494 [2024-07-25 11:26:10.350208] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.494 [2024-07-25 11:26:10.352811] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.494 [2024-07-25 11:26:10.352937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.494 [2024-07-25 11:26:10.353260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:54.494 [2024-07-25 11:26:10.353286] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.494 [2024-07-25 11:26:10.353703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:54.494 [2024-07-25 11:26:10.353978] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:54.494 [2024-07-25 11:26:10.353996] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:54.494 [2024-07-25 11:26:10.354286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.494 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.752 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.010 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.010 "name": "raid_bdev1", 00:14:55.010 "uuid": "bfb41328-2913-4d3c-a6a2-17b6ff30cf76", 00:14:55.010 "strip_size_kb": 0, 00:14:55.010 "state": "online", 00:14:55.010 "raid_level": "raid1", 00:14:55.010 "superblock": true, 00:14:55.010 "num_base_bdevs": 3, 00:14:55.010 "num_base_bdevs_discovered": 3, 00:14:55.010 "num_base_bdevs_operational": 3, 00:14:55.010 "base_bdevs_list": [ 00:14:55.010 { 00:14:55.010 "name": "BaseBdev1", 00:14:55.010 "uuid": "f3086e9c-0c46-5610-ab8e-8b0e7717d9fe", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 2048, 00:14:55.010 "data_size": 63488 00:14:55.010 }, 00:14:55.010 { 00:14:55.010 "name": "BaseBdev2", 00:14:55.010 "uuid": "5026f2ea-de53-59f9-9751-f25e72ea40af", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 2048, 00:14:55.010 "data_size": 63488 00:14:55.010 }, 00:14:55.010 { 
00:14:55.010 "name": "BaseBdev3", 00:14:55.010 "uuid": "fc088baf-c66c-52a5-9b9f-2c36f9495af5", 00:14:55.010 "is_configured": true, 00:14:55.010 "data_offset": 2048, 00:14:55.010 "data_size": 63488 00:14:55.010 } 00:14:55.010 ] 00:14:55.010 }' 00:14:55.010 11:26:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.010 11:26:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.578 11:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:55.578 11:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:55.578 [2024-07-25 11:26:11.383882] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:56.513 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:56.771 [2024-07-25 11:26:12.536198] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:56.771 [2024-07-25 11:26:12.536269] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.771 [2024-07-25 11:26:12.536564] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:14:56.771 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:56.771 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.772 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.030 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.030 "name": "raid_bdev1", 00:14:57.030 "uuid": "bfb41328-2913-4d3c-a6a2-17b6ff30cf76", 00:14:57.030 "strip_size_kb": 
0, 00:14:57.030 "state": "online", 00:14:57.030 "raid_level": "raid1", 00:14:57.030 "superblock": true, 00:14:57.030 "num_base_bdevs": 3, 00:14:57.030 "num_base_bdevs_discovered": 2, 00:14:57.030 "num_base_bdevs_operational": 2, 00:14:57.030 "base_bdevs_list": [ 00:14:57.030 { 00:14:57.030 "name": null, 00:14:57.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.030 "is_configured": false, 00:14:57.030 "data_offset": 2048, 00:14:57.030 "data_size": 63488 00:14:57.030 }, 00:14:57.030 { 00:14:57.030 "name": "BaseBdev2", 00:14:57.030 "uuid": "5026f2ea-de53-59f9-9751-f25e72ea40af", 00:14:57.030 "is_configured": true, 00:14:57.030 "data_offset": 2048, 00:14:57.030 "data_size": 63488 00:14:57.030 }, 00:14:57.030 { 00:14:57.030 "name": "BaseBdev3", 00:14:57.030 "uuid": "fc088baf-c66c-52a5-9b9f-2c36f9495af5", 00:14:57.030 "is_configured": true, 00:14:57.030 "data_offset": 2048, 00:14:57.030 "data_size": 63488 00:14:57.030 } 00:14:57.030 ] 00:14:57.030 }' 00:14:57.030 11:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.030 11:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:58.002 [2024-07-25 11:26:13.742783] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.002 [2024-07-25 11:26:13.742825] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.002 [2024-07-25 11:26:13.746003] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.002 [2024-07-25 11:26:13.746090] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.002 [2024-07-25 11:26:13.746196] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.002 [2024-07-25 11:26:13.746212] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:58.002 0 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 75790 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75790 ']' 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75790 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75790 00:14:58.002 killing process with pid 75790 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75790' 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75790 00:14:58.002 [2024-07-25 11:26:13.791346] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.002 11:26:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75790 00:14:58.260 [2024-07-25 
11:26:14.005112] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.K6j488VyFU 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:59.634 ************************************ 00:14:59.634 END TEST raid_write_error_test 00:14:59.634 ************************************ 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:59.634 00:14:59.634 real 0m8.535s 00:14:59.634 user 0m12.981s 00:14:59.634 sys 0m1.030s 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.634 11:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.634 11:26:15 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:14:59.634 11:26:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:59.634 11:26:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:59.634 11:26:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:59.634 11:26:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.634 11:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.634 ************************************ 00:14:59.634 START TEST raid_state_function_test 00:14:59.634 ************************************ 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
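The read and write error tests above settle pass/fail the same way: the bdevperf log is scraped for the raid_bdev1 line, and its failures-per-second column has to read 0.00, because raid1 has redundancy and an injected base-bdev error must not surface as an I/O error on the volume. With the log file from this run, that check is roughly:

  # bdevperf_log is the mktemp'd file from earlier (/raidtest/tmp.K6j488VyFU in this run)
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ "$fail_per_s" == 0.00 ]] || echo "raid_bdev1 reported I/O failures: $fail_per_s/s"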
00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.634 Process raid pid: 75987 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=75987 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 75987' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 75987 /var/tmp/spdk-raid.sock 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75987 ']' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.634 11:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.634 [2024-07-25 11:26:15.385104] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
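raid_state_function_test runs against the lightweight bdev_svc app instead of bdevperf and, as the next stretch of the trace shows, it creates Existed_Raid before any of its four base bdevs exist, so the volume parks in the configuring state. A sketch of that opening step, with the socket and arguments captured here:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Start the plain bdev service app in the background and wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  # None of BaseBdev1..4 exist yet, so Existed_Raid is created in state "configuring".
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring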
00:14:59.634 [2024-07-25 11:26:15.385262] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.892 [2024-07-25 11:26:15.553665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.150 [2024-07-25 11:26:15.803634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.150 [2024-07-25 11:26:16.013607] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.150 [2024-07-25 11:26:16.013681] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.715 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.715 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:00.715 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:00.973 [2024-07-25 11:26:16.693795] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.973 [2024-07-25 11:26:16.693874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.973 [2024-07-25 11:26:16.693895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.973 [2024-07-25 11:26:16.693910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.973 [2024-07-25 11:26:16.693925] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.973 [2024-07-25 11:26:16.693938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.973 [2024-07-25 11:26:16.693951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:00.973 [2024-07-25 11:26:16.693963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.973 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.231 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.231 "name": "Existed_Raid", 00:15:01.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.231 "strip_size_kb": 64, 00:15:01.231 "state": "configuring", 00:15:01.231 "raid_level": "raid0", 00:15:01.231 "superblock": false, 00:15:01.231 "num_base_bdevs": 4, 00:15:01.231 "num_base_bdevs_discovered": 0, 00:15:01.231 "num_base_bdevs_operational": 4, 00:15:01.231 "base_bdevs_list": [ 00:15:01.231 { 00:15:01.231 "name": "BaseBdev1", 00:15:01.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.231 "is_configured": false, 00:15:01.232 "data_offset": 0, 00:15:01.232 "data_size": 0 00:15:01.232 }, 00:15:01.232 { 00:15:01.232 "name": "BaseBdev2", 00:15:01.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.232 "is_configured": false, 00:15:01.232 "data_offset": 0, 00:15:01.232 "data_size": 0 00:15:01.232 }, 00:15:01.232 { 00:15:01.232 "name": "BaseBdev3", 00:15:01.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.232 "is_configured": false, 00:15:01.232 "data_offset": 0, 00:15:01.232 "data_size": 0 00:15:01.232 }, 00:15:01.232 { 00:15:01.232 "name": "BaseBdev4", 00:15:01.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.232 "is_configured": false, 00:15:01.232 "data_offset": 0, 00:15:01.232 "data_size": 0 00:15:01.232 } 00:15:01.232 ] 00:15:01.232 }' 00:15:01.232 11:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.232 11:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.799 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.056 [2024-07-25 11:26:17.921996] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.056 [2024-07-25 11:26:17.922268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:02.314 11:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:02.571 [2024-07-25 11:26:18.198108] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.571 [2024-07-25 11:26:18.198410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.571 [2024-07-25 11:26:18.198537] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.571 [2024-07-25 11:26:18.198598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.571 [2024-07-25 11:26:18.198850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:02.571 [2024-07-25 11:26:18.198879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:02.571 [2024-07-25 11:26:18.198894] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:02.571 [2024-07-25 11:26:18.198907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:15:02.571 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.829 [2024-07-25 11:26:18.479574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.829 BaseBdev1 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.829 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.087 11:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.347 [ 00:15:03.347 { 00:15:03.347 "name": "BaseBdev1", 00:15:03.347 "aliases": [ 00:15:03.347 "173c918d-4c0a-418a-b497-ed6f9ecfda56" 00:15:03.347 ], 00:15:03.347 "product_name": "Malloc disk", 00:15:03.347 "block_size": 512, 00:15:03.347 "num_blocks": 65536, 00:15:03.347 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:03.347 "assigned_rate_limits": { 00:15:03.347 "rw_ios_per_sec": 0, 00:15:03.347 "rw_mbytes_per_sec": 0, 00:15:03.347 "r_mbytes_per_sec": 0, 00:15:03.347 "w_mbytes_per_sec": 0 00:15:03.347 }, 00:15:03.347 "claimed": true, 00:15:03.347 "claim_type": "exclusive_write", 00:15:03.347 "zoned": false, 00:15:03.347 "supported_io_types": { 00:15:03.347 "read": true, 00:15:03.347 "write": true, 00:15:03.347 "unmap": true, 00:15:03.347 "flush": true, 00:15:03.347 "reset": true, 00:15:03.347 "nvme_admin": false, 00:15:03.347 "nvme_io": false, 00:15:03.347 "nvme_io_md": false, 00:15:03.347 "write_zeroes": true, 00:15:03.347 "zcopy": true, 00:15:03.347 "get_zone_info": false, 00:15:03.347 "zone_management": false, 00:15:03.347 "zone_append": false, 00:15:03.347 "compare": false, 00:15:03.347 "compare_and_write": false, 00:15:03.347 "abort": true, 00:15:03.347 "seek_hole": false, 00:15:03.347 "seek_data": false, 00:15:03.347 "copy": true, 00:15:03.347 "nvme_iov_md": false 00:15:03.347 }, 00:15:03.347 "memory_domains": [ 00:15:03.347 { 00:15:03.347 "dma_device_id": "system", 00:15:03.347 "dma_device_type": 1 00:15:03.347 }, 00:15:03.347 { 00:15:03.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.347 "dma_device_type": 2 00:15:03.347 } 00:15:03.347 ], 00:15:03.347 "driver_specific": {} 00:15:03.347 } 00:15:03.347 ] 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.347 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.606 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.606 "name": "Existed_Raid", 00:15:03.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.606 "strip_size_kb": 64, 00:15:03.606 "state": "configuring", 00:15:03.606 "raid_level": "raid0", 00:15:03.606 "superblock": false, 00:15:03.606 "num_base_bdevs": 4, 00:15:03.606 "num_base_bdevs_discovered": 1, 00:15:03.606 "num_base_bdevs_operational": 4, 00:15:03.606 "base_bdevs_list": [ 00:15:03.606 { 00:15:03.606 "name": "BaseBdev1", 00:15:03.606 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:03.606 "is_configured": true, 00:15:03.606 "data_offset": 0, 00:15:03.606 "data_size": 65536 00:15:03.606 }, 00:15:03.606 { 00:15:03.606 "name": "BaseBdev2", 00:15:03.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.606 "is_configured": false, 00:15:03.606 "data_offset": 0, 00:15:03.606 "data_size": 0 00:15:03.606 }, 00:15:03.606 { 00:15:03.606 "name": "BaseBdev3", 00:15:03.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.606 "is_configured": false, 00:15:03.606 "data_offset": 0, 00:15:03.606 "data_size": 0 00:15:03.606 }, 00:15:03.606 { 00:15:03.606 "name": "BaseBdev4", 00:15:03.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.606 "is_configured": false, 00:15:03.606 "data_offset": 0, 00:15:03.606 "data_size": 0 00:15:03.606 } 00:15:03.606 ] 00:15:03.606 }' 00:15:03.606 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.606 11:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.171 11:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.430 [2024-07-25 11:26:20.244123] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.430 [2024-07-25 11:26:20.244202] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:04.430 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:04.688 [2024-07-25 11:26:20.484253] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:15:04.688 [2024-07-25 11:26:20.486702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.688 [2024-07-25 11:26:20.486763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.688 [2024-07-25 11:26:20.486783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.688 [2024-07-25 11:26:20.486797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.688 [2024-07-25 11:26:20.486813] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:04.688 [2024-07-25 11:26:20.486826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.688 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.947 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.947 "name": "Existed_Raid", 00:15:04.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.947 "strip_size_kb": 64, 00:15:04.947 "state": "configuring", 00:15:04.947 "raid_level": "raid0", 00:15:04.947 "superblock": false, 00:15:04.947 "num_base_bdevs": 4, 00:15:04.947 "num_base_bdevs_discovered": 1, 00:15:04.947 "num_base_bdevs_operational": 4, 00:15:04.947 "base_bdevs_list": [ 00:15:04.947 { 00:15:04.947 "name": "BaseBdev1", 00:15:04.947 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:04.947 "is_configured": true, 00:15:04.947 "data_offset": 0, 00:15:04.947 "data_size": 65536 00:15:04.947 }, 00:15:04.947 { 00:15:04.947 "name": "BaseBdev2", 00:15:04.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.947 "is_configured": false, 00:15:04.947 "data_offset": 0, 00:15:04.947 "data_size": 0 00:15:04.947 }, 00:15:04.947 { 00:15:04.947 "name": "BaseBdev3", 00:15:04.947 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:04.947 "is_configured": false, 00:15:04.947 "data_offset": 0, 00:15:04.947 "data_size": 0 00:15:04.947 }, 00:15:04.947 { 00:15:04.947 "name": "BaseBdev4", 00:15:04.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.947 "is_configured": false, 00:15:04.947 "data_offset": 0, 00:15:04.947 "data_size": 0 00:15:04.947 } 00:15:04.947 ] 00:15:04.947 }' 00:15:04.947 11:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.947 11:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.882 [2024-07-25 11:26:21.684879] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.882 BaseBdev2 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.882 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.140 11:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.399 [ 00:15:06.399 { 00:15:06.399 "name": "BaseBdev2", 00:15:06.399 "aliases": [ 00:15:06.399 "ed338507-63a7-4b12-b3a1-a803ad5adf4b" 00:15:06.399 ], 00:15:06.399 "product_name": "Malloc disk", 00:15:06.399 "block_size": 512, 00:15:06.399 "num_blocks": 65536, 00:15:06.399 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:06.399 "assigned_rate_limits": { 00:15:06.399 "rw_ios_per_sec": 0, 00:15:06.399 "rw_mbytes_per_sec": 0, 00:15:06.399 "r_mbytes_per_sec": 0, 00:15:06.399 "w_mbytes_per_sec": 0 00:15:06.399 }, 00:15:06.399 "claimed": true, 00:15:06.399 "claim_type": "exclusive_write", 00:15:06.399 "zoned": false, 00:15:06.399 "supported_io_types": { 00:15:06.399 "read": true, 00:15:06.399 "write": true, 00:15:06.399 "unmap": true, 00:15:06.399 "flush": true, 00:15:06.399 "reset": true, 00:15:06.399 "nvme_admin": false, 00:15:06.399 "nvme_io": false, 00:15:06.399 "nvme_io_md": false, 00:15:06.399 "write_zeroes": true, 00:15:06.399 "zcopy": true, 00:15:06.399 "get_zone_info": false, 00:15:06.399 "zone_management": false, 00:15:06.399 "zone_append": false, 00:15:06.399 "compare": false, 00:15:06.399 "compare_and_write": false, 00:15:06.399 "abort": true, 00:15:06.399 "seek_hole": false, 00:15:06.399 "seek_data": false, 00:15:06.399 "copy": true, 00:15:06.399 "nvme_iov_md": false 00:15:06.399 }, 00:15:06.399 "memory_domains": [ 00:15:06.399 { 00:15:06.399 "dma_device_id": "system", 00:15:06.399 "dma_device_type": 1 00:15:06.399 }, 00:15:06.399 { 00:15:06.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.399 
"dma_device_type": 2 00:15:06.399 } 00:15:06.399 ], 00:15:06.399 "driver_specific": {} 00:15:06.399 } 00:15:06.399 ] 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.399 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.400 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.659 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.659 "name": "Existed_Raid", 00:15:06.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.659 "strip_size_kb": 64, 00:15:06.659 "state": "configuring", 00:15:06.659 "raid_level": "raid0", 00:15:06.659 "superblock": false, 00:15:06.659 "num_base_bdevs": 4, 00:15:06.659 "num_base_bdevs_discovered": 2, 00:15:06.659 "num_base_bdevs_operational": 4, 00:15:06.659 "base_bdevs_list": [ 00:15:06.659 { 00:15:06.659 "name": "BaseBdev1", 00:15:06.659 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:06.659 "is_configured": true, 00:15:06.659 "data_offset": 0, 00:15:06.659 "data_size": 65536 00:15:06.659 }, 00:15:06.659 { 00:15:06.659 "name": "BaseBdev2", 00:15:06.659 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:06.659 "is_configured": true, 00:15:06.659 "data_offset": 0, 00:15:06.659 "data_size": 65536 00:15:06.659 }, 00:15:06.659 { 00:15:06.659 "name": "BaseBdev3", 00:15:06.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.659 "is_configured": false, 00:15:06.659 "data_offset": 0, 00:15:06.659 "data_size": 0 00:15:06.659 }, 00:15:06.659 { 00:15:06.659 "name": "BaseBdev4", 00:15:06.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.659 "is_configured": false, 00:15:06.659 "data_offset": 0, 00:15:06.659 "data_size": 0 00:15:06.659 } 00:15:06.659 ] 00:15:06.659 }' 00:15:06.659 11:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.659 11:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.595 11:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.595 [2024-07-25 11:26:23.448330] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.595 BaseBdev3 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:07.595 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.159 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:08.159 [ 00:15:08.159 { 00:15:08.159 "name": "BaseBdev3", 00:15:08.159 "aliases": [ 00:15:08.159 "90854941-6617-4b15-bd8b-2cfad90a2d10" 00:15:08.159 ], 00:15:08.159 "product_name": "Malloc disk", 00:15:08.159 "block_size": 512, 00:15:08.159 "num_blocks": 65536, 00:15:08.159 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:08.159 "assigned_rate_limits": { 00:15:08.159 "rw_ios_per_sec": 0, 00:15:08.159 "rw_mbytes_per_sec": 0, 00:15:08.159 "r_mbytes_per_sec": 0, 00:15:08.159 "w_mbytes_per_sec": 0 00:15:08.159 }, 00:15:08.159 "claimed": true, 00:15:08.159 "claim_type": "exclusive_write", 00:15:08.159 "zoned": false, 00:15:08.159 "supported_io_types": { 00:15:08.159 "read": true, 00:15:08.159 "write": true, 00:15:08.159 "unmap": true, 00:15:08.159 "flush": true, 00:15:08.159 "reset": true, 00:15:08.159 "nvme_admin": false, 00:15:08.159 "nvme_io": false, 00:15:08.159 "nvme_io_md": false, 00:15:08.159 "write_zeroes": true, 00:15:08.159 "zcopy": true, 00:15:08.159 "get_zone_info": false, 00:15:08.159 "zone_management": false, 00:15:08.159 "zone_append": false, 00:15:08.159 "compare": false, 00:15:08.159 "compare_and_write": false, 00:15:08.159 "abort": true, 00:15:08.159 "seek_hole": false, 00:15:08.159 "seek_data": false, 00:15:08.159 "copy": true, 00:15:08.159 "nvme_iov_md": false 00:15:08.159 }, 00:15:08.159 "memory_domains": [ 00:15:08.159 { 00:15:08.159 "dma_device_id": "system", 00:15:08.159 "dma_device_type": 1 00:15:08.159 }, 00:15:08.159 { 00:15:08.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.159 "dma_device_type": 2 00:15:08.159 } 00:15:08.159 ], 00:15:08.159 "driver_specific": {} 00:15:08.159 } 00:15:08.159 ] 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.160 11:26:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.160 11:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.417 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.417 "name": "Existed_Raid", 00:15:08.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.417 "strip_size_kb": 64, 00:15:08.417 "state": "configuring", 00:15:08.417 "raid_level": "raid0", 00:15:08.417 "superblock": false, 00:15:08.417 "num_base_bdevs": 4, 00:15:08.417 "num_base_bdevs_discovered": 3, 00:15:08.417 "num_base_bdevs_operational": 4, 00:15:08.417 "base_bdevs_list": [ 00:15:08.417 { 00:15:08.417 "name": "BaseBdev1", 00:15:08.417 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:08.417 "is_configured": true, 00:15:08.417 "data_offset": 0, 00:15:08.417 "data_size": 65536 00:15:08.417 }, 00:15:08.417 { 00:15:08.417 "name": "BaseBdev2", 00:15:08.417 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:08.417 "is_configured": true, 00:15:08.417 "data_offset": 0, 00:15:08.417 "data_size": 65536 00:15:08.417 }, 00:15:08.417 { 00:15:08.417 "name": "BaseBdev3", 00:15:08.417 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:08.417 "is_configured": true, 00:15:08.417 "data_offset": 0, 00:15:08.417 "data_size": 65536 00:15:08.417 }, 00:15:08.417 { 00:15:08.417 "name": "BaseBdev4", 00:15:08.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.417 "is_configured": false, 00:15:08.417 "data_offset": 0, 00:15:08.417 "data_size": 0 00:15:08.417 } 00:15:08.417 ] 00:15:08.417 }' 00:15:08.417 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.417 11:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.351 11:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:09.351 [2024-07-25 11:26:25.211046] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.351 [2024-07-25 11:26:25.211114] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:09.351 [2024-07-25 11:26:25.211132] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:09.351 [2024-07-25 11:26:25.211464] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:09.351 [2024-07-25 11:26:25.211756] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:09.351 [2024-07-25 11:26:25.211774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:09.351 [2024-07-25 11:26:25.212057] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.351 BaseBdev4 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:09.608 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.864 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:10.124 [ 00:15:10.124 { 00:15:10.124 "name": "BaseBdev4", 00:15:10.124 "aliases": [ 00:15:10.124 "0e680912-dbc7-4ba9-b599-356fa9c8b0e4" 00:15:10.124 ], 00:15:10.124 "product_name": "Malloc disk", 00:15:10.124 "block_size": 512, 00:15:10.124 "num_blocks": 65536, 00:15:10.124 "uuid": "0e680912-dbc7-4ba9-b599-356fa9c8b0e4", 00:15:10.124 "assigned_rate_limits": { 00:15:10.124 "rw_ios_per_sec": 0, 00:15:10.124 "rw_mbytes_per_sec": 0, 00:15:10.124 "r_mbytes_per_sec": 0, 00:15:10.124 "w_mbytes_per_sec": 0 00:15:10.124 }, 00:15:10.124 "claimed": true, 00:15:10.124 "claim_type": "exclusive_write", 00:15:10.124 "zoned": false, 00:15:10.124 "supported_io_types": { 00:15:10.124 "read": true, 00:15:10.124 "write": true, 00:15:10.124 "unmap": true, 00:15:10.124 "flush": true, 00:15:10.124 "reset": true, 00:15:10.124 "nvme_admin": false, 00:15:10.124 "nvme_io": false, 00:15:10.124 "nvme_io_md": false, 00:15:10.124 "write_zeroes": true, 00:15:10.124 "zcopy": true, 00:15:10.124 "get_zone_info": false, 00:15:10.124 "zone_management": false, 00:15:10.124 "zone_append": false, 00:15:10.124 "compare": false, 00:15:10.124 "compare_and_write": false, 00:15:10.124 "abort": true, 00:15:10.124 "seek_hole": false, 00:15:10.124 "seek_data": false, 00:15:10.124 "copy": true, 00:15:10.124 "nvme_iov_md": false 00:15:10.124 }, 00:15:10.124 "memory_domains": [ 00:15:10.124 { 00:15:10.124 "dma_device_id": "system", 00:15:10.124 "dma_device_type": 1 00:15:10.124 }, 00:15:10.124 { 00:15:10.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.124 "dma_device_type": 2 00:15:10.124 } 00:15:10.124 ], 00:15:10.124 "driver_specific": {} 00:15:10.124 } 00:15:10.124 ] 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:10.124 
11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.124 11:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.383 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.383 "name": "Existed_Raid", 00:15:10.383 "uuid": "2ec5649a-3a36-4476-b2fc-be7b15152b84", 00:15:10.383 "strip_size_kb": 64, 00:15:10.383 "state": "online", 00:15:10.384 "raid_level": "raid0", 00:15:10.384 "superblock": false, 00:15:10.384 "num_base_bdevs": 4, 00:15:10.384 "num_base_bdevs_discovered": 4, 00:15:10.384 "num_base_bdevs_operational": 4, 00:15:10.384 "base_bdevs_list": [ 00:15:10.384 { 00:15:10.384 "name": "BaseBdev1", 00:15:10.384 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:10.384 "is_configured": true, 00:15:10.384 "data_offset": 0, 00:15:10.384 "data_size": 65536 00:15:10.384 }, 00:15:10.384 { 00:15:10.384 "name": "BaseBdev2", 00:15:10.384 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:10.384 "is_configured": true, 00:15:10.384 "data_offset": 0, 00:15:10.384 "data_size": 65536 00:15:10.384 }, 00:15:10.384 { 00:15:10.384 "name": "BaseBdev3", 00:15:10.384 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:10.384 "is_configured": true, 00:15:10.384 "data_offset": 0, 00:15:10.384 "data_size": 65536 00:15:10.384 }, 00:15:10.384 { 00:15:10.384 "name": "BaseBdev4", 00:15:10.384 "uuid": "0e680912-dbc7-4ba9-b599-356fa9c8b0e4", 00:15:10.384 "is_configured": true, 00:15:10.384 "data_offset": 0, 00:15:10.384 "data_size": 65536 00:15:10.384 } 00:15:10.384 ] 00:15:10.384 }' 00:15:10.384 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.384 11:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
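A rough sketch (not the autotest script itself) of the verify_raid_bdev_properties pass the trace above is entering: the raid volume is re-read, the names of its configured base bdevs are collected, and block_size, md_size, md_interleave and dif_type of each member are compared against the raid volume. The rpc wrapper, variable names and the mismatch message below are shorthand introduced for this sketch only; the socket path, RPC commands and jq filters are the ones visible in the surrounding log.

# Shorthand wrapper around the rpc.py invocation used throughout this trace.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Re-read the raid volume and collect the names of its configured base bdevs.
raid_info=$(rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")

# Every member is expected to report the same layout as the raid volume itself.
for name in $base_names; do
    base_info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
    for field in block_size md_size md_interleave dif_type; do
        [[ $(jq ".$field" <<< "$base_info") == $(jq ".$field" <<< "$raid_info") ]] \
            || echo "property mismatch on $field for $name"
    done
done

In this run all four Malloc base bdevs report block_size 512 and null metadata fields, which is why the trace shows the repeated "[[ 512 == 512 ]]" and "[[ null == null ]]" checks.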
00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:10.949 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:11.206 [2024-07-25 11:26:26.924039] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.206 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:11.206 "name": "Existed_Raid", 00:15:11.206 "aliases": [ 00:15:11.206 "2ec5649a-3a36-4476-b2fc-be7b15152b84" 00:15:11.206 ], 00:15:11.206 "product_name": "Raid Volume", 00:15:11.206 "block_size": 512, 00:15:11.206 "num_blocks": 262144, 00:15:11.206 "uuid": "2ec5649a-3a36-4476-b2fc-be7b15152b84", 00:15:11.206 "assigned_rate_limits": { 00:15:11.206 "rw_ios_per_sec": 0, 00:15:11.206 "rw_mbytes_per_sec": 0, 00:15:11.206 "r_mbytes_per_sec": 0, 00:15:11.206 "w_mbytes_per_sec": 0 00:15:11.206 }, 00:15:11.206 "claimed": false, 00:15:11.206 "zoned": false, 00:15:11.206 "supported_io_types": { 00:15:11.206 "read": true, 00:15:11.206 "write": true, 00:15:11.206 "unmap": true, 00:15:11.206 "flush": true, 00:15:11.206 "reset": true, 00:15:11.206 "nvme_admin": false, 00:15:11.206 "nvme_io": false, 00:15:11.206 "nvme_io_md": false, 00:15:11.206 "write_zeroes": true, 00:15:11.206 "zcopy": false, 00:15:11.206 "get_zone_info": false, 00:15:11.206 "zone_management": false, 00:15:11.206 "zone_append": false, 00:15:11.206 "compare": false, 00:15:11.206 "compare_and_write": false, 00:15:11.206 "abort": false, 00:15:11.206 "seek_hole": false, 00:15:11.206 "seek_data": false, 00:15:11.206 "copy": false, 00:15:11.206 "nvme_iov_md": false 00:15:11.206 }, 00:15:11.206 "memory_domains": [ 00:15:11.206 { 00:15:11.206 "dma_device_id": "system", 00:15:11.206 "dma_device_type": 1 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.206 "dma_device_type": 2 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "system", 00:15:11.206 "dma_device_type": 1 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.206 "dma_device_type": 2 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "system", 00:15:11.206 "dma_device_type": 1 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.206 "dma_device_type": 2 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "system", 00:15:11.206 "dma_device_type": 1 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.206 "dma_device_type": 2 00:15:11.206 } 00:15:11.206 ], 00:15:11.206 "driver_specific": { 00:15:11.206 "raid": { 00:15:11.206 "uuid": "2ec5649a-3a36-4476-b2fc-be7b15152b84", 00:15:11.206 "strip_size_kb": 64, 00:15:11.206 "state": "online", 00:15:11.206 "raid_level": "raid0", 00:15:11.206 "superblock": false, 00:15:11.206 "num_base_bdevs": 4, 00:15:11.206 "num_base_bdevs_discovered": 4, 00:15:11.206 "num_base_bdevs_operational": 4, 00:15:11.206 "base_bdevs_list": [ 00:15:11.206 { 00:15:11.206 "name": "BaseBdev1", 00:15:11.206 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:11.206 "is_configured": true, 00:15:11.206 "data_offset": 0, 00:15:11.206 "data_size": 65536 
00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "name": "BaseBdev2", 00:15:11.206 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:11.206 "is_configured": true, 00:15:11.206 "data_offset": 0, 00:15:11.206 "data_size": 65536 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "name": "BaseBdev3", 00:15:11.206 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:11.206 "is_configured": true, 00:15:11.206 "data_offset": 0, 00:15:11.206 "data_size": 65536 00:15:11.206 }, 00:15:11.206 { 00:15:11.206 "name": "BaseBdev4", 00:15:11.206 "uuid": "0e680912-dbc7-4ba9-b599-356fa9c8b0e4", 00:15:11.206 "is_configured": true, 00:15:11.206 "data_offset": 0, 00:15:11.206 "data_size": 65536 00:15:11.206 } 00:15:11.206 ] 00:15:11.206 } 00:15:11.206 } 00:15:11.206 }' 00:15:11.206 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.206 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:11.206 BaseBdev2 00:15:11.206 BaseBdev3 00:15:11.206 BaseBdev4' 00:15:11.206 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.207 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:11.207 11:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.464 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.464 "name": "BaseBdev1", 00:15:11.464 "aliases": [ 00:15:11.464 "173c918d-4c0a-418a-b497-ed6f9ecfda56" 00:15:11.464 ], 00:15:11.464 "product_name": "Malloc disk", 00:15:11.464 "block_size": 512, 00:15:11.464 "num_blocks": 65536, 00:15:11.464 "uuid": "173c918d-4c0a-418a-b497-ed6f9ecfda56", 00:15:11.464 "assigned_rate_limits": { 00:15:11.464 "rw_ios_per_sec": 0, 00:15:11.465 "rw_mbytes_per_sec": 0, 00:15:11.465 "r_mbytes_per_sec": 0, 00:15:11.465 "w_mbytes_per_sec": 0 00:15:11.465 }, 00:15:11.465 "claimed": true, 00:15:11.465 "claim_type": "exclusive_write", 00:15:11.465 "zoned": false, 00:15:11.465 "supported_io_types": { 00:15:11.465 "read": true, 00:15:11.465 "write": true, 00:15:11.465 "unmap": true, 00:15:11.465 "flush": true, 00:15:11.465 "reset": true, 00:15:11.465 "nvme_admin": false, 00:15:11.465 "nvme_io": false, 00:15:11.465 "nvme_io_md": false, 00:15:11.465 "write_zeroes": true, 00:15:11.465 "zcopy": true, 00:15:11.465 "get_zone_info": false, 00:15:11.465 "zone_management": false, 00:15:11.465 "zone_append": false, 00:15:11.465 "compare": false, 00:15:11.465 "compare_and_write": false, 00:15:11.465 "abort": true, 00:15:11.465 "seek_hole": false, 00:15:11.465 "seek_data": false, 00:15:11.465 "copy": true, 00:15:11.465 "nvme_iov_md": false 00:15:11.465 }, 00:15:11.465 "memory_domains": [ 00:15:11.465 { 00:15:11.465 "dma_device_id": "system", 00:15:11.465 "dma_device_type": 1 00:15:11.465 }, 00:15:11.465 { 00:15:11.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.465 "dma_device_type": 2 00:15:11.465 } 00:15:11.465 ], 00:15:11.465 "driver_specific": {} 00:15:11.465 }' 00:15:11.465 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.465 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.465 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.465 
11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.722 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.722 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.722 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.722 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.722 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.723 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.723 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.980 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.980 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.980 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:11.980 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.238 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.238 "name": "BaseBdev2", 00:15:12.238 "aliases": [ 00:15:12.238 "ed338507-63a7-4b12-b3a1-a803ad5adf4b" 00:15:12.238 ], 00:15:12.238 "product_name": "Malloc disk", 00:15:12.238 "block_size": 512, 00:15:12.238 "num_blocks": 65536, 00:15:12.238 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:12.238 "assigned_rate_limits": { 00:15:12.238 "rw_ios_per_sec": 0, 00:15:12.238 "rw_mbytes_per_sec": 0, 00:15:12.238 "r_mbytes_per_sec": 0, 00:15:12.238 "w_mbytes_per_sec": 0 00:15:12.238 }, 00:15:12.238 "claimed": true, 00:15:12.238 "claim_type": "exclusive_write", 00:15:12.238 "zoned": false, 00:15:12.238 "supported_io_types": { 00:15:12.238 "read": true, 00:15:12.238 "write": true, 00:15:12.238 "unmap": true, 00:15:12.238 "flush": true, 00:15:12.238 "reset": true, 00:15:12.238 "nvme_admin": false, 00:15:12.238 "nvme_io": false, 00:15:12.238 "nvme_io_md": false, 00:15:12.238 "write_zeroes": true, 00:15:12.238 "zcopy": true, 00:15:12.238 "get_zone_info": false, 00:15:12.238 "zone_management": false, 00:15:12.238 "zone_append": false, 00:15:12.238 "compare": false, 00:15:12.238 "compare_and_write": false, 00:15:12.238 "abort": true, 00:15:12.238 "seek_hole": false, 00:15:12.238 "seek_data": false, 00:15:12.238 "copy": true, 00:15:12.238 "nvme_iov_md": false 00:15:12.238 }, 00:15:12.238 "memory_domains": [ 00:15:12.238 { 00:15:12.238 "dma_device_id": "system", 00:15:12.238 "dma_device_type": 1 00:15:12.238 }, 00:15:12.238 { 00:15:12.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.238 "dma_device_type": 2 00:15:12.238 } 00:15:12.238 ], 00:15:12.238 "driver_specific": {} 00:15:12.238 }' 00:15:12.238 11:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.238 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.238 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.238 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.497 
11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.497 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.755 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.755 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.755 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:12.755 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.013 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.013 "name": "BaseBdev3", 00:15:13.013 "aliases": [ 00:15:13.013 "90854941-6617-4b15-bd8b-2cfad90a2d10" 00:15:13.014 ], 00:15:13.014 "product_name": "Malloc disk", 00:15:13.014 "block_size": 512, 00:15:13.014 "num_blocks": 65536, 00:15:13.014 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:13.014 "assigned_rate_limits": { 00:15:13.014 "rw_ios_per_sec": 0, 00:15:13.014 "rw_mbytes_per_sec": 0, 00:15:13.014 "r_mbytes_per_sec": 0, 00:15:13.014 "w_mbytes_per_sec": 0 00:15:13.014 }, 00:15:13.014 "claimed": true, 00:15:13.014 "claim_type": "exclusive_write", 00:15:13.014 "zoned": false, 00:15:13.014 "supported_io_types": { 00:15:13.014 "read": true, 00:15:13.014 "write": true, 00:15:13.014 "unmap": true, 00:15:13.014 "flush": true, 00:15:13.014 "reset": true, 00:15:13.014 "nvme_admin": false, 00:15:13.014 "nvme_io": false, 00:15:13.014 "nvme_io_md": false, 00:15:13.014 "write_zeroes": true, 00:15:13.014 "zcopy": true, 00:15:13.014 "get_zone_info": false, 00:15:13.014 "zone_management": false, 00:15:13.014 "zone_append": false, 00:15:13.014 "compare": false, 00:15:13.014 "compare_and_write": false, 00:15:13.014 "abort": true, 00:15:13.014 "seek_hole": false, 00:15:13.014 "seek_data": false, 00:15:13.014 "copy": true, 00:15:13.014 "nvme_iov_md": false 00:15:13.014 }, 00:15:13.014 "memory_domains": [ 00:15:13.014 { 00:15:13.014 "dma_device_id": "system", 00:15:13.014 "dma_device_type": 1 00:15:13.014 }, 00:15:13.014 { 00:15:13.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.014 "dma_device_type": 2 00:15:13.014 } 00:15:13.014 ], 00:15:13.014 "driver_specific": {} 00:15:13.014 }' 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.014 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
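A hedged sketch of the teardown step this trace reaches further on: one malloc base bdev is deleted out of the raid0 volume and the array is expected to drop to "offline" with three members still discovered, since raid0 carries no redundancy (has_redundancy raid0 returns 1, so expected_state becomes offline). The rpc wrapper and the echo messages are shorthand added for this sketch; the commands and jq filters are the ones shown in the log.

# Shorthand wrapper, same socket as above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Remove one member of the raid0 volume.
rpc bdev_malloc_delete BaseBdev1

# Re-read the raid bdev state and confirm it went offline with 3 members left.
state_json=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<< "$state_json") == offline ]] || echo "unexpected raid state"
[[ $(jq .num_base_bdevs_discovered <<< "$state_json") -eq 3 ]] || echo "unexpected member count"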
00:15:13.272 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.272 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.272 11:26:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.272 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.272 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.272 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.272 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:13.272 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.562 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.562 "name": "BaseBdev4", 00:15:13.562 "aliases": [ 00:15:13.562 "0e680912-dbc7-4ba9-b599-356fa9c8b0e4" 00:15:13.562 ], 00:15:13.562 "product_name": "Malloc disk", 00:15:13.562 "block_size": 512, 00:15:13.562 "num_blocks": 65536, 00:15:13.562 "uuid": "0e680912-dbc7-4ba9-b599-356fa9c8b0e4", 00:15:13.562 "assigned_rate_limits": { 00:15:13.562 "rw_ios_per_sec": 0, 00:15:13.562 "rw_mbytes_per_sec": 0, 00:15:13.562 "r_mbytes_per_sec": 0, 00:15:13.562 "w_mbytes_per_sec": 0 00:15:13.562 }, 00:15:13.562 "claimed": true, 00:15:13.562 "claim_type": "exclusive_write", 00:15:13.562 "zoned": false, 00:15:13.562 "supported_io_types": { 00:15:13.562 "read": true, 00:15:13.562 "write": true, 00:15:13.562 "unmap": true, 00:15:13.562 "flush": true, 00:15:13.562 "reset": true, 00:15:13.562 "nvme_admin": false, 00:15:13.562 "nvme_io": false, 00:15:13.562 "nvme_io_md": false, 00:15:13.562 "write_zeroes": true, 00:15:13.562 "zcopy": true, 00:15:13.562 "get_zone_info": false, 00:15:13.562 "zone_management": false, 00:15:13.562 "zone_append": false, 00:15:13.562 "compare": false, 00:15:13.562 "compare_and_write": false, 00:15:13.562 "abort": true, 00:15:13.562 "seek_hole": false, 00:15:13.562 "seek_data": false, 00:15:13.562 "copy": true, 00:15:13.562 "nvme_iov_md": false 00:15:13.562 }, 00:15:13.562 "memory_domains": [ 00:15:13.562 { 00:15:13.562 "dma_device_id": "system", 00:15:13.562 "dma_device_type": 1 00:15:13.562 }, 00:15:13.562 { 00:15:13.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.562 "dma_device_type": 2 00:15:13.562 } 00:15:13.562 ], 00:15:13.562 "driver_specific": {} 00:15:13.562 }' 00:15:13.562 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.562 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.562 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.562 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:15:13.823 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.081 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.081 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.081 11:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.351 [2024-07-25 11:26:29.976563] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.351 [2024-07-25 11:26:29.976608] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.351 [2024-07-25 11:26:29.976716] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.351 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.610 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.610 "name": "Existed_Raid", 00:15:14.610 "uuid": "2ec5649a-3a36-4476-b2fc-be7b15152b84", 00:15:14.610 "strip_size_kb": 64, 00:15:14.610 "state": "offline", 00:15:14.610 "raid_level": "raid0", 00:15:14.610 "superblock": false, 00:15:14.610 "num_base_bdevs": 4, 00:15:14.610 "num_base_bdevs_discovered": 3, 00:15:14.610 "num_base_bdevs_operational": 3, 00:15:14.610 "base_bdevs_list": [ 00:15:14.610 { 00:15:14.610 "name": null, 00:15:14.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.610 "is_configured": false, 00:15:14.610 "data_offset": 0, 00:15:14.610 "data_size": 65536 
00:15:14.610 }, 00:15:14.610 { 00:15:14.610 "name": "BaseBdev2", 00:15:14.610 "uuid": "ed338507-63a7-4b12-b3a1-a803ad5adf4b", 00:15:14.610 "is_configured": true, 00:15:14.610 "data_offset": 0, 00:15:14.610 "data_size": 65536 00:15:14.610 }, 00:15:14.610 { 00:15:14.610 "name": "BaseBdev3", 00:15:14.610 "uuid": "90854941-6617-4b15-bd8b-2cfad90a2d10", 00:15:14.610 "is_configured": true, 00:15:14.610 "data_offset": 0, 00:15:14.610 "data_size": 65536 00:15:14.610 }, 00:15:14.610 { 00:15:14.610 "name": "BaseBdev4", 00:15:14.610 "uuid": "0e680912-dbc7-4ba9-b599-356fa9c8b0e4", 00:15:14.610 "is_configured": true, 00:15:14.610 "data_offset": 0, 00:15:14.610 "data_size": 65536 00:15:14.610 } 00:15:14.610 ] 00:15:14.610 }' 00:15:14.610 11:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.610 11:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.544 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:15.802 [2024-07-25 11:26:31.603401] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.060 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:16.060 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:16.060 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.060 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:16.319 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:16.319 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.319 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:16.578 [2024-07-25 11:26:32.274052] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:16.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:16.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.578 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:16.836 11:26:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:16.836 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.836 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:17.095 [2024-07-25 11:26:32.908729] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:17.095 [2024-07-25 11:26:32.908814] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:17.358 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:17.358 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:17.358 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.358 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.617 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:17.875 BaseBdev2 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:17.875 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.133 11:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.392 [ 00:15:18.392 { 00:15:18.392 "name": "BaseBdev2", 00:15:18.392 "aliases": [ 00:15:18.392 "eb364dac-da7d-4fd0-a649-a9a366744f2a" 00:15:18.392 ], 00:15:18.392 "product_name": "Malloc disk", 00:15:18.392 "block_size": 512, 00:15:18.392 "num_blocks": 65536, 00:15:18.392 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:18.392 "assigned_rate_limits": { 00:15:18.392 "rw_ios_per_sec": 0, 00:15:18.392 "rw_mbytes_per_sec": 0, 00:15:18.392 "r_mbytes_per_sec": 0, 00:15:18.392 "w_mbytes_per_sec": 0 00:15:18.392 }, 00:15:18.392 "claimed": false, 00:15:18.392 "zoned": false, 00:15:18.392 "supported_io_types": { 
00:15:18.392 "read": true, 00:15:18.392 "write": true, 00:15:18.392 "unmap": true, 00:15:18.392 "flush": true, 00:15:18.392 "reset": true, 00:15:18.392 "nvme_admin": false, 00:15:18.392 "nvme_io": false, 00:15:18.392 "nvme_io_md": false, 00:15:18.392 "write_zeroes": true, 00:15:18.392 "zcopy": true, 00:15:18.392 "get_zone_info": false, 00:15:18.392 "zone_management": false, 00:15:18.392 "zone_append": false, 00:15:18.392 "compare": false, 00:15:18.392 "compare_and_write": false, 00:15:18.392 "abort": true, 00:15:18.392 "seek_hole": false, 00:15:18.392 "seek_data": false, 00:15:18.392 "copy": true, 00:15:18.392 "nvme_iov_md": false 00:15:18.392 }, 00:15:18.392 "memory_domains": [ 00:15:18.392 { 00:15:18.392 "dma_device_id": "system", 00:15:18.392 "dma_device_type": 1 00:15:18.392 }, 00:15:18.392 { 00:15:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.392 "dma_device_type": 2 00:15:18.392 } 00:15:18.392 ], 00:15:18.392 "driver_specific": {} 00:15:18.392 } 00:15:18.392 ] 00:15:18.392 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:18.392 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:18.392 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:18.392 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.651 BaseBdev3 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:18.651 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.908 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:19.167 [ 00:15:19.167 { 00:15:19.167 "name": "BaseBdev3", 00:15:19.167 "aliases": [ 00:15:19.167 "49a3026f-e560-4a32-886e-326cf9c1c41c" 00:15:19.167 ], 00:15:19.167 "product_name": "Malloc disk", 00:15:19.167 "block_size": 512, 00:15:19.167 "num_blocks": 65536, 00:15:19.167 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:19.167 "assigned_rate_limits": { 00:15:19.167 "rw_ios_per_sec": 0, 00:15:19.167 "rw_mbytes_per_sec": 0, 00:15:19.167 "r_mbytes_per_sec": 0, 00:15:19.167 "w_mbytes_per_sec": 0 00:15:19.167 }, 00:15:19.167 "claimed": false, 00:15:19.167 "zoned": false, 00:15:19.167 "supported_io_types": { 00:15:19.167 "read": true, 00:15:19.167 "write": true, 00:15:19.167 "unmap": true, 00:15:19.167 "flush": true, 00:15:19.167 "reset": true, 00:15:19.167 "nvme_admin": false, 00:15:19.167 "nvme_io": false, 00:15:19.167 "nvme_io_md": false, 00:15:19.167 "write_zeroes": true, 00:15:19.167 "zcopy": true, 00:15:19.167 "get_zone_info": false, 
00:15:19.167 "zone_management": false, 00:15:19.167 "zone_append": false, 00:15:19.167 "compare": false, 00:15:19.167 "compare_and_write": false, 00:15:19.167 "abort": true, 00:15:19.167 "seek_hole": false, 00:15:19.167 "seek_data": false, 00:15:19.167 "copy": true, 00:15:19.167 "nvme_iov_md": false 00:15:19.167 }, 00:15:19.167 "memory_domains": [ 00:15:19.167 { 00:15:19.167 "dma_device_id": "system", 00:15:19.167 "dma_device_type": 1 00:15:19.167 }, 00:15:19.167 { 00:15:19.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.167 "dma_device_type": 2 00:15:19.167 } 00:15:19.167 ], 00:15:19.167 "driver_specific": {} 00:15:19.167 } 00:15:19.167 ] 00:15:19.167 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:19.167 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:19.167 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:19.167 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:19.426 BaseBdev4 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:19.684 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.944 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:19.944 [ 00:15:19.944 { 00:15:19.944 "name": "BaseBdev4", 00:15:19.944 "aliases": [ 00:15:19.944 "1a707421-8259-4ae1-a824-134ee7563405" 00:15:19.944 ], 00:15:19.944 "product_name": "Malloc disk", 00:15:19.944 "block_size": 512, 00:15:19.944 "num_blocks": 65536, 00:15:19.944 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:19.944 "assigned_rate_limits": { 00:15:19.944 "rw_ios_per_sec": 0, 00:15:19.944 "rw_mbytes_per_sec": 0, 00:15:19.944 "r_mbytes_per_sec": 0, 00:15:19.944 "w_mbytes_per_sec": 0 00:15:19.944 }, 00:15:19.944 "claimed": false, 00:15:19.944 "zoned": false, 00:15:19.944 "supported_io_types": { 00:15:19.944 "read": true, 00:15:19.944 "write": true, 00:15:19.944 "unmap": true, 00:15:19.944 "flush": true, 00:15:19.944 "reset": true, 00:15:19.944 "nvme_admin": false, 00:15:19.944 "nvme_io": false, 00:15:19.944 "nvme_io_md": false, 00:15:19.944 "write_zeroes": true, 00:15:19.944 "zcopy": true, 00:15:19.944 "get_zone_info": false, 00:15:19.944 "zone_management": false, 00:15:19.944 "zone_append": false, 00:15:19.944 "compare": false, 00:15:19.944 "compare_and_write": false, 00:15:19.944 "abort": true, 00:15:19.944 "seek_hole": false, 00:15:19.944 "seek_data": false, 00:15:19.944 "copy": true, 00:15:19.944 "nvme_iov_md": false 00:15:19.944 }, 00:15:19.944 "memory_domains": 
[ 00:15:19.944 { 00:15:19.944 "dma_device_id": "system", 00:15:19.944 "dma_device_type": 1 00:15:19.944 }, 00:15:19.944 { 00:15:19.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.944 "dma_device_type": 2 00:15:19.944 } 00:15:19.944 ], 00:15:19.944 "driver_specific": {} 00:15:19.944 } 00:15:19.944 ] 00:15:20.203 11:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:20.203 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:20.203 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:20.203 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:20.203 [2024-07-25 11:26:36.054678] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.203 [2024-07-25 11:26:36.054767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.203 [2024-07-25 11:26:36.054802] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.203 [2024-07-25 11:26:36.057153] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.203 [2024-07-25 11:26:36.057232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.203 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.462 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.719 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.719 "name": "Existed_Raid", 00:15:20.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.719 "strip_size_kb": 64, 00:15:20.719 "state": "configuring", 00:15:20.719 "raid_level": "raid0", 00:15:20.719 "superblock": false, 00:15:20.719 "num_base_bdevs": 4, 00:15:20.719 "num_base_bdevs_discovered": 3, 00:15:20.719 "num_base_bdevs_operational": 4, 00:15:20.719 "base_bdevs_list": [ 00:15:20.719 { 00:15:20.719 "name": "BaseBdev1", 00:15:20.719 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:20.719 "is_configured": false, 00:15:20.719 "data_offset": 0, 00:15:20.719 "data_size": 0 00:15:20.719 }, 00:15:20.719 { 00:15:20.719 "name": "BaseBdev2", 00:15:20.719 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:20.719 "is_configured": true, 00:15:20.719 "data_offset": 0, 00:15:20.719 "data_size": 65536 00:15:20.719 }, 00:15:20.719 { 00:15:20.719 "name": "BaseBdev3", 00:15:20.719 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:20.719 "is_configured": true, 00:15:20.719 "data_offset": 0, 00:15:20.719 "data_size": 65536 00:15:20.719 }, 00:15:20.719 { 00:15:20.719 "name": "BaseBdev4", 00:15:20.719 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:20.719 "is_configured": true, 00:15:20.719 "data_offset": 0, 00:15:20.719 "data_size": 65536 00:15:20.719 } 00:15:20.719 ] 00:15:20.719 }' 00:15:20.719 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.719 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.286 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:21.544 [2024-07-25 11:26:37.319021] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.544 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.544 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.544 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.544 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.545 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.803 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.803 "name": "Existed_Raid", 00:15:21.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.803 "strip_size_kb": 64, 00:15:21.803 "state": "configuring", 00:15:21.803 "raid_level": "raid0", 00:15:21.803 "superblock": false, 00:15:21.803 "num_base_bdevs": 4, 00:15:21.804 "num_base_bdevs_discovered": 2, 00:15:21.804 "num_base_bdevs_operational": 4, 00:15:21.804 "base_bdevs_list": [ 00:15:21.804 { 00:15:21.804 "name": "BaseBdev1", 00:15:21.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.804 "is_configured": false, 00:15:21.804 "data_offset": 0, 00:15:21.804 "data_size": 0 
00:15:21.804 }, 00:15:21.804 { 00:15:21.804 "name": null, 00:15:21.804 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:21.804 "is_configured": false, 00:15:21.804 "data_offset": 0, 00:15:21.804 "data_size": 65536 00:15:21.804 }, 00:15:21.804 { 00:15:21.804 "name": "BaseBdev3", 00:15:21.804 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:21.804 "is_configured": true, 00:15:21.804 "data_offset": 0, 00:15:21.804 "data_size": 65536 00:15:21.804 }, 00:15:21.804 { 00:15:21.804 "name": "BaseBdev4", 00:15:21.804 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:21.804 "is_configured": true, 00:15:21.804 "data_offset": 0, 00:15:21.804 "data_size": 65536 00:15:21.804 } 00:15:21.804 ] 00:15:21.804 }' 00:15:21.804 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.804 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.738 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.738 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.738 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:22.738 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.304 [2024-07-25 11:26:38.955418] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.305 BaseBdev1 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:23.305 11:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.563 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.822 [ 00:15:23.822 { 00:15:23.822 "name": "BaseBdev1", 00:15:23.822 "aliases": [ 00:15:23.822 "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec" 00:15:23.822 ], 00:15:23.822 "product_name": "Malloc disk", 00:15:23.822 "block_size": 512, 00:15:23.822 "num_blocks": 65536, 00:15:23.822 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:23.822 "assigned_rate_limits": { 00:15:23.822 "rw_ios_per_sec": 0, 00:15:23.822 "rw_mbytes_per_sec": 0, 00:15:23.822 "r_mbytes_per_sec": 0, 00:15:23.822 "w_mbytes_per_sec": 0 00:15:23.822 }, 00:15:23.822 "claimed": true, 00:15:23.822 "claim_type": "exclusive_write", 00:15:23.822 "zoned": false, 00:15:23.822 "supported_io_types": { 00:15:23.822 "read": true, 00:15:23.822 "write": true, 00:15:23.822 "unmap": true, 00:15:23.822 "flush": true, 00:15:23.822 "reset": true, 
00:15:23.822 "nvme_admin": false, 00:15:23.822 "nvme_io": false, 00:15:23.822 "nvme_io_md": false, 00:15:23.822 "write_zeroes": true, 00:15:23.822 "zcopy": true, 00:15:23.822 "get_zone_info": false, 00:15:23.822 "zone_management": false, 00:15:23.822 "zone_append": false, 00:15:23.822 "compare": false, 00:15:23.822 "compare_and_write": false, 00:15:23.822 "abort": true, 00:15:23.822 "seek_hole": false, 00:15:23.822 "seek_data": false, 00:15:23.822 "copy": true, 00:15:23.822 "nvme_iov_md": false 00:15:23.822 }, 00:15:23.822 "memory_domains": [ 00:15:23.822 { 00:15:23.822 "dma_device_id": "system", 00:15:23.822 "dma_device_type": 1 00:15:23.822 }, 00:15:23.822 { 00:15:23.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.822 "dma_device_type": 2 00:15:23.822 } 00:15:23.822 ], 00:15:23.822 "driver_specific": {} 00:15:23.822 } 00:15:23.822 ] 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.822 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.823 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.823 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.823 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.823 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.081 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.081 "name": "Existed_Raid", 00:15:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.081 "strip_size_kb": 64, 00:15:24.081 "state": "configuring", 00:15:24.081 "raid_level": "raid0", 00:15:24.081 "superblock": false, 00:15:24.081 "num_base_bdevs": 4, 00:15:24.081 "num_base_bdevs_discovered": 3, 00:15:24.081 "num_base_bdevs_operational": 4, 00:15:24.081 "base_bdevs_list": [ 00:15:24.081 { 00:15:24.081 "name": "BaseBdev1", 00:15:24.081 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 0, 00:15:24.081 "data_size": 65536 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": null, 00:15:24.081 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:24.081 "is_configured": false, 00:15:24.081 "data_offset": 0, 00:15:24.081 "data_size": 65536 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": "BaseBdev3", 00:15:24.081 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 0, 
00:15:24.081 "data_size": 65536 00:15:24.081 }, 00:15:24.081 { 00:15:24.081 "name": "BaseBdev4", 00:15:24.081 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:24.081 "is_configured": true, 00:15:24.081 "data_offset": 0, 00:15:24.081 "data_size": 65536 00:15:24.081 } 00:15:24.081 ] 00:15:24.081 }' 00:15:24.081 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.081 11:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.015 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.015 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:25.015 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:25.015 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:25.273 [2024-07-25 11:26:41.152204] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.531 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.789 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.789 "name": "Existed_Raid", 00:15:25.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.789 "strip_size_kb": 64, 00:15:25.789 "state": "configuring", 00:15:25.789 "raid_level": "raid0", 00:15:25.789 "superblock": false, 00:15:25.789 "num_base_bdevs": 4, 00:15:25.789 "num_base_bdevs_discovered": 2, 00:15:25.789 "num_base_bdevs_operational": 4, 00:15:25.789 "base_bdevs_list": [ 00:15:25.789 { 00:15:25.789 "name": "BaseBdev1", 00:15:25.789 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:25.789 "is_configured": true, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 }, 00:15:25.789 { 00:15:25.789 "name": null, 00:15:25.789 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:25.789 
"is_configured": false, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 }, 00:15:25.789 { 00:15:25.789 "name": null, 00:15:25.789 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:25.789 "is_configured": false, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 }, 00:15:25.789 { 00:15:25.789 "name": "BaseBdev4", 00:15:25.789 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:25.789 "is_configured": true, 00:15:25.789 "data_offset": 0, 00:15:25.789 "data_size": 65536 00:15:25.789 } 00:15:25.789 ] 00:15:25.789 }' 00:15:25.789 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.789 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.355 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.355 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:26.613 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:26.613 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:26.871 [2024-07-25 11:26:42.680724] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.871 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.436 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.436 "name": "Existed_Raid", 00:15:27.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.436 "strip_size_kb": 64, 00:15:27.436 "state": "configuring", 00:15:27.436 "raid_level": "raid0", 00:15:27.436 "superblock": false, 00:15:27.436 "num_base_bdevs": 4, 00:15:27.436 "num_base_bdevs_discovered": 3, 00:15:27.436 "num_base_bdevs_operational": 4, 00:15:27.436 "base_bdevs_list": [ 00:15:27.436 { 00:15:27.436 "name": 
"BaseBdev1", 00:15:27.436 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:27.436 "is_configured": true, 00:15:27.436 "data_offset": 0, 00:15:27.436 "data_size": 65536 00:15:27.436 }, 00:15:27.436 { 00:15:27.436 "name": null, 00:15:27.436 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:27.436 "is_configured": false, 00:15:27.436 "data_offset": 0, 00:15:27.436 "data_size": 65536 00:15:27.436 }, 00:15:27.436 { 00:15:27.436 "name": "BaseBdev3", 00:15:27.436 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:27.436 "is_configured": true, 00:15:27.436 "data_offset": 0, 00:15:27.436 "data_size": 65536 00:15:27.436 }, 00:15:27.436 { 00:15:27.436 "name": "BaseBdev4", 00:15:27.436 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:27.436 "is_configured": true, 00:15:27.436 "data_offset": 0, 00:15:27.436 "data_size": 65536 00:15:27.436 } 00:15:27.436 ] 00:15:27.436 }' 00:15:27.437 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.437 11:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.059 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.059 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:28.059 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:28.059 11:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:28.316 [2024-07-25 11:26:44.137163] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.574 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.831 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.831 "name": "Existed_Raid", 00:15:28.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.831 "strip_size_kb": 64, 00:15:28.831 "state": "configuring", 
00:15:28.831 "raid_level": "raid0", 00:15:28.831 "superblock": false, 00:15:28.831 "num_base_bdevs": 4, 00:15:28.831 "num_base_bdevs_discovered": 2, 00:15:28.831 "num_base_bdevs_operational": 4, 00:15:28.831 "base_bdevs_list": [ 00:15:28.831 { 00:15:28.831 "name": null, 00:15:28.831 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:28.831 "is_configured": false, 00:15:28.831 "data_offset": 0, 00:15:28.831 "data_size": 65536 00:15:28.831 }, 00:15:28.831 { 00:15:28.831 "name": null, 00:15:28.831 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:28.831 "is_configured": false, 00:15:28.831 "data_offset": 0, 00:15:28.831 "data_size": 65536 00:15:28.831 }, 00:15:28.831 { 00:15:28.831 "name": "BaseBdev3", 00:15:28.831 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:28.831 "is_configured": true, 00:15:28.831 "data_offset": 0, 00:15:28.831 "data_size": 65536 00:15:28.831 }, 00:15:28.831 { 00:15:28.831 "name": "BaseBdev4", 00:15:28.831 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:28.831 "is_configured": true, 00:15:28.831 "data_offset": 0, 00:15:28.831 "data_size": 65536 00:15:28.831 } 00:15:28.831 ] 00:15:28.831 }' 00:15:28.831 11:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.831 11:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.398 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.398 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.656 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:29.656 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:29.915 [2024-07-25 11:26:45.636395] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.915 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:30.262 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.262 "name": "Existed_Raid", 00:15:30.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.262 "strip_size_kb": 64, 00:15:30.262 "state": "configuring", 00:15:30.262 "raid_level": "raid0", 00:15:30.262 "superblock": false, 00:15:30.262 "num_base_bdevs": 4, 00:15:30.262 "num_base_bdevs_discovered": 3, 00:15:30.262 "num_base_bdevs_operational": 4, 00:15:30.262 "base_bdevs_list": [ 00:15:30.262 { 00:15:30.262 "name": null, 00:15:30.262 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:30.262 "is_configured": false, 00:15:30.262 "data_offset": 0, 00:15:30.262 "data_size": 65536 00:15:30.262 }, 00:15:30.262 { 00:15:30.262 "name": "BaseBdev2", 00:15:30.262 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:30.262 "is_configured": true, 00:15:30.262 "data_offset": 0, 00:15:30.262 "data_size": 65536 00:15:30.262 }, 00:15:30.262 { 00:15:30.262 "name": "BaseBdev3", 00:15:30.262 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:30.262 "is_configured": true, 00:15:30.262 "data_offset": 0, 00:15:30.262 "data_size": 65536 00:15:30.262 }, 00:15:30.262 { 00:15:30.262 "name": "BaseBdev4", 00:15:30.262 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:30.262 "is_configured": true, 00:15:30.262 "data_offset": 0, 00:15:30.262 "data_size": 65536 00:15:30.262 } 00:15:30.262 ] 00:15:30.262 }' 00:15:30.262 11:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.262 11:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.830 11:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.831 11:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:31.092 11:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:31.092 11:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:31.092 11:26:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.657 11:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c7fbd9e5-1abd-476d-b7d3-efee0554d1ec 00:15:31.915 [2024-07-25 11:26:47.544461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:31.915 [2024-07-25 11:26:47.544531] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:31.915 [2024-07-25 11:26:47.544554] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:31.915 [2024-07-25 11:26:47.544891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:31.915 NewBaseBdev 00:15:31.915 [2024-07-25 11:26:47.545081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:31.915 [2024-07-25 11:26:47.545104] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:31.915 [2024-07-25 11:26:47.545386] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.915 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.174 11:26:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.174 [ 00:15:32.174 { 00:15:32.174 "name": "NewBaseBdev", 00:15:32.174 "aliases": [ 00:15:32.174 "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec" 00:15:32.174 ], 00:15:32.174 "product_name": "Malloc disk", 00:15:32.174 "block_size": 512, 00:15:32.174 "num_blocks": 65536, 00:15:32.174 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:32.174 "assigned_rate_limits": { 00:15:32.174 "rw_ios_per_sec": 0, 00:15:32.174 "rw_mbytes_per_sec": 0, 00:15:32.174 "r_mbytes_per_sec": 0, 00:15:32.174 "w_mbytes_per_sec": 0 00:15:32.174 }, 00:15:32.174 "claimed": true, 00:15:32.174 "claim_type": "exclusive_write", 00:15:32.174 "zoned": false, 00:15:32.174 "supported_io_types": { 00:15:32.174 "read": true, 00:15:32.174 "write": true, 00:15:32.174 "unmap": true, 00:15:32.174 "flush": true, 00:15:32.174 "reset": true, 00:15:32.174 "nvme_admin": false, 00:15:32.174 "nvme_io": false, 00:15:32.174 "nvme_io_md": false, 00:15:32.174 "write_zeroes": true, 00:15:32.174 "zcopy": true, 00:15:32.174 "get_zone_info": false, 00:15:32.174 "zone_management": false, 00:15:32.174 "zone_append": false, 00:15:32.174 "compare": false, 00:15:32.174 "compare_and_write": false, 00:15:32.174 "abort": true, 00:15:32.174 "seek_hole": false, 00:15:32.174 "seek_data": false, 00:15:32.174 "copy": true, 00:15:32.174 "nvme_iov_md": false 00:15:32.174 }, 00:15:32.174 "memory_domains": [ 00:15:32.174 { 00:15:32.174 "dma_device_id": "system", 00:15:32.174 "dma_device_type": 1 00:15:32.174 }, 00:15:32.174 { 00:15:32.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.174 "dma_device_type": 2 00:15:32.174 } 00:15:32.174 ], 00:15:32.174 "driver_specific": {} 00:15:32.174 } 00:15:32.174 ] 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.174 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.432 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.432 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.432 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.432 "name": "Existed_Raid", 00:15:32.432 "uuid": "0745cce2-c0b1-4068-92f4-9b5169d83fce", 00:15:32.432 "strip_size_kb": 64, 00:15:32.432 "state": "online", 00:15:32.432 "raid_level": "raid0", 00:15:32.432 "superblock": false, 00:15:32.432 "num_base_bdevs": 4, 00:15:32.432 "num_base_bdevs_discovered": 4, 00:15:32.432 "num_base_bdevs_operational": 4, 00:15:32.432 "base_bdevs_list": [ 00:15:32.432 { 00:15:32.432 "name": "NewBaseBdev", 00:15:32.432 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:32.432 "is_configured": true, 00:15:32.432 "data_offset": 0, 00:15:32.432 "data_size": 65536 00:15:32.432 }, 00:15:32.432 { 00:15:32.432 "name": "BaseBdev2", 00:15:32.432 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:32.432 "is_configured": true, 00:15:32.432 "data_offset": 0, 00:15:32.432 "data_size": 65536 00:15:32.432 }, 00:15:32.432 { 00:15:32.432 "name": "BaseBdev3", 00:15:32.432 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:32.432 "is_configured": true, 00:15:32.432 "data_offset": 0, 00:15:32.432 "data_size": 65536 00:15:32.432 }, 00:15:32.432 { 00:15:32.432 "name": "BaseBdev4", 00:15:32.432 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:32.432 "is_configured": true, 00:15:32.432 "data_offset": 0, 00:15:32.432 "data_size": 65536 00:15:32.432 } 00:15:32.432 ] 00:15:32.432 }' 00:15:32.432 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.432 11:26:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:33.365 11:26:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:33.365 [2024-07-25 11:26:49.245474] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.623 11:26:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:33.623 "name": "Existed_Raid", 00:15:33.623 "aliases": [ 00:15:33.623 "0745cce2-c0b1-4068-92f4-9b5169d83fce" 00:15:33.623 ], 00:15:33.623 "product_name": "Raid Volume", 00:15:33.623 "block_size": 512, 00:15:33.623 "num_blocks": 262144, 00:15:33.623 "uuid": "0745cce2-c0b1-4068-92f4-9b5169d83fce", 00:15:33.623 "assigned_rate_limits": { 00:15:33.623 "rw_ios_per_sec": 0, 00:15:33.623 "rw_mbytes_per_sec": 0, 00:15:33.623 "r_mbytes_per_sec": 0, 00:15:33.623 "w_mbytes_per_sec": 0 00:15:33.623 }, 00:15:33.623 "claimed": false, 00:15:33.623 "zoned": false, 00:15:33.623 "supported_io_types": { 00:15:33.623 "read": true, 00:15:33.623 "write": true, 00:15:33.623 "unmap": true, 00:15:33.623 "flush": true, 00:15:33.623 "reset": true, 00:15:33.623 "nvme_admin": false, 00:15:33.623 "nvme_io": false, 00:15:33.623 "nvme_io_md": false, 00:15:33.623 "write_zeroes": true, 00:15:33.623 "zcopy": false, 00:15:33.623 "get_zone_info": false, 00:15:33.623 "zone_management": false, 00:15:33.623 "zone_append": false, 00:15:33.623 "compare": false, 00:15:33.623 "compare_and_write": false, 00:15:33.623 "abort": false, 00:15:33.623 "seek_hole": false, 00:15:33.623 "seek_data": false, 00:15:33.623 "copy": false, 00:15:33.623 "nvme_iov_md": false 00:15:33.623 }, 00:15:33.623 "memory_domains": [ 00:15:33.623 { 00:15:33.623 "dma_device_id": "system", 00:15:33.623 "dma_device_type": 1 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.623 "dma_device_type": 2 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "system", 00:15:33.623 "dma_device_type": 1 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.623 "dma_device_type": 2 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "system", 00:15:33.623 "dma_device_type": 1 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.623 "dma_device_type": 2 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "system", 00:15:33.623 "dma_device_type": 1 00:15:33.623 }, 00:15:33.623 { 00:15:33.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.623 "dma_device_type": 2 00:15:33.623 } 00:15:33.623 ], 00:15:33.623 "driver_specific": { 00:15:33.623 "raid": { 00:15:33.623 "uuid": "0745cce2-c0b1-4068-92f4-9b5169d83fce", 00:15:33.623 "strip_size_kb": 64, 00:15:33.623 "state": "online", 00:15:33.623 "raid_level": "raid0", 00:15:33.623 "superblock": false, 00:15:33.623 "num_base_bdevs": 4, 00:15:33.623 "num_base_bdevs_discovered": 4, 00:15:33.623 "num_base_bdevs_operational": 4, 00:15:33.623 "base_bdevs_list": [ 00:15:33.623 { 00:15:33.623 "name": "NewBaseBdev", 00:15:33.623 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:33.623 "is_configured": true, 00:15:33.624 "data_offset": 0, 00:15:33.624 "data_size": 65536 00:15:33.624 }, 00:15:33.624 { 00:15:33.624 "name": "BaseBdev2", 00:15:33.624 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:33.624 "is_configured": true, 00:15:33.624 "data_offset": 0, 00:15:33.624 "data_size": 65536 00:15:33.624 }, 00:15:33.624 { 00:15:33.624 "name": "BaseBdev3", 00:15:33.624 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:33.624 "is_configured": true, 00:15:33.624 "data_offset": 0, 00:15:33.624 "data_size": 65536 00:15:33.624 }, 00:15:33.624 { 00:15:33.624 "name": "BaseBdev4", 00:15:33.624 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:33.624 "is_configured": true, 00:15:33.624 "data_offset": 0, 00:15:33.624 "data_size": 65536 
00:15:33.624 } 00:15:33.624 ] 00:15:33.624 } 00:15:33.624 } 00:15:33.624 }' 00:15:33.624 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.624 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:33.624 BaseBdev2 00:15:33.624 BaseBdev3 00:15:33.624 BaseBdev4' 00:15:33.624 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.624 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:33.624 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.882 "name": "NewBaseBdev", 00:15:33.882 "aliases": [ 00:15:33.882 "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec" 00:15:33.882 ], 00:15:33.882 "product_name": "Malloc disk", 00:15:33.882 "block_size": 512, 00:15:33.882 "num_blocks": 65536, 00:15:33.882 "uuid": "c7fbd9e5-1abd-476d-b7d3-efee0554d1ec", 00:15:33.882 "assigned_rate_limits": { 00:15:33.882 "rw_ios_per_sec": 0, 00:15:33.882 "rw_mbytes_per_sec": 0, 00:15:33.882 "r_mbytes_per_sec": 0, 00:15:33.882 "w_mbytes_per_sec": 0 00:15:33.882 }, 00:15:33.882 "claimed": true, 00:15:33.882 "claim_type": "exclusive_write", 00:15:33.882 "zoned": false, 00:15:33.882 "supported_io_types": { 00:15:33.882 "read": true, 00:15:33.882 "write": true, 00:15:33.882 "unmap": true, 00:15:33.882 "flush": true, 00:15:33.882 "reset": true, 00:15:33.882 "nvme_admin": false, 00:15:33.882 "nvme_io": false, 00:15:33.882 "nvme_io_md": false, 00:15:33.882 "write_zeroes": true, 00:15:33.882 "zcopy": true, 00:15:33.882 "get_zone_info": false, 00:15:33.882 "zone_management": false, 00:15:33.882 "zone_append": false, 00:15:33.882 "compare": false, 00:15:33.882 "compare_and_write": false, 00:15:33.882 "abort": true, 00:15:33.882 "seek_hole": false, 00:15:33.882 "seek_data": false, 00:15:33.882 "copy": true, 00:15:33.882 "nvme_iov_md": false 00:15:33.882 }, 00:15:33.882 "memory_domains": [ 00:15:33.882 { 00:15:33.882 "dma_device_id": "system", 00:15:33.882 "dma_device_type": 1 00:15:33.882 }, 00:15:33.882 { 00:15:33.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.882 "dma_device_type": 2 00:15:33.882 } 00:15:33.882 ], 00:15:33.882 "driver_specific": {} 00:15:33.882 }' 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.882 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.140 11:26:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.140 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.140 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.140 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:34.140 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.706 "name": "BaseBdev2", 00:15:34.706 "aliases": [ 00:15:34.706 "eb364dac-da7d-4fd0-a649-a9a366744f2a" 00:15:34.706 ], 00:15:34.706 "product_name": "Malloc disk", 00:15:34.706 "block_size": 512, 00:15:34.706 "num_blocks": 65536, 00:15:34.706 "uuid": "eb364dac-da7d-4fd0-a649-a9a366744f2a", 00:15:34.706 "assigned_rate_limits": { 00:15:34.706 "rw_ios_per_sec": 0, 00:15:34.706 "rw_mbytes_per_sec": 0, 00:15:34.706 "r_mbytes_per_sec": 0, 00:15:34.706 "w_mbytes_per_sec": 0 00:15:34.706 }, 00:15:34.706 "claimed": true, 00:15:34.706 "claim_type": "exclusive_write", 00:15:34.706 "zoned": false, 00:15:34.706 "supported_io_types": { 00:15:34.706 "read": true, 00:15:34.706 "write": true, 00:15:34.706 "unmap": true, 00:15:34.706 "flush": true, 00:15:34.706 "reset": true, 00:15:34.706 "nvme_admin": false, 00:15:34.706 "nvme_io": false, 00:15:34.706 "nvme_io_md": false, 00:15:34.706 "write_zeroes": true, 00:15:34.706 "zcopy": true, 00:15:34.706 "get_zone_info": false, 00:15:34.706 "zone_management": false, 00:15:34.706 "zone_append": false, 00:15:34.706 "compare": false, 00:15:34.706 "compare_and_write": false, 00:15:34.706 "abort": true, 00:15:34.706 "seek_hole": false, 00:15:34.706 "seek_data": false, 00:15:34.706 "copy": true, 00:15:34.706 "nvme_iov_md": false 00:15:34.706 }, 00:15:34.706 "memory_domains": [ 00:15:34.706 { 00:15:34.706 "dma_device_id": "system", 00:15:34.706 "dma_device_type": 1 00:15:34.706 }, 00:15:34.706 { 00:15:34.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.706 "dma_device_type": 2 00:15:34.706 } 00:15:34.706 ], 00:15:34.706 "driver_specific": {} 00:15:34.706 }' 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.706 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.964 11:26:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:35.222 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.222 "name": "BaseBdev3", 00:15:35.222 "aliases": [ 00:15:35.222 "49a3026f-e560-4a32-886e-326cf9c1c41c" 00:15:35.222 ], 00:15:35.222 "product_name": "Malloc disk", 00:15:35.222 "block_size": 512, 00:15:35.222 "num_blocks": 65536, 00:15:35.222 "uuid": "49a3026f-e560-4a32-886e-326cf9c1c41c", 00:15:35.222 "assigned_rate_limits": { 00:15:35.222 "rw_ios_per_sec": 0, 00:15:35.222 "rw_mbytes_per_sec": 0, 00:15:35.222 "r_mbytes_per_sec": 0, 00:15:35.222 "w_mbytes_per_sec": 0 00:15:35.222 }, 00:15:35.222 "claimed": true, 00:15:35.222 "claim_type": "exclusive_write", 00:15:35.222 "zoned": false, 00:15:35.222 "supported_io_types": { 00:15:35.222 "read": true, 00:15:35.222 "write": true, 00:15:35.222 "unmap": true, 00:15:35.222 "flush": true, 00:15:35.222 "reset": true, 00:15:35.222 "nvme_admin": false, 00:15:35.222 "nvme_io": false, 00:15:35.222 "nvme_io_md": false, 00:15:35.222 "write_zeroes": true, 00:15:35.222 "zcopy": true, 00:15:35.222 "get_zone_info": false, 00:15:35.222 "zone_management": false, 00:15:35.222 "zone_append": false, 00:15:35.222 "compare": false, 00:15:35.222 "compare_and_write": false, 00:15:35.222 "abort": true, 00:15:35.222 "seek_hole": false, 00:15:35.222 "seek_data": false, 00:15:35.222 "copy": true, 00:15:35.222 "nvme_iov_md": false 00:15:35.222 }, 00:15:35.222 "memory_domains": [ 00:15:35.222 { 00:15:35.222 "dma_device_id": "system", 00:15:35.222 "dma_device_type": 1 00:15:35.222 }, 00:15:35.222 { 00:15:35.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.222 "dma_device_type": 2 00:15:35.222 } 00:15:35.222 ], 00:15:35.222 "driver_specific": {} 00:15:35.222 }' 00:15:35.222 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.222 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.480 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.753 11:26:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:35.753 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:36.011 "name": "BaseBdev4", 00:15:36.011 "aliases": [ 00:15:36.011 "1a707421-8259-4ae1-a824-134ee7563405" 00:15:36.011 ], 00:15:36.011 "product_name": "Malloc disk", 00:15:36.011 "block_size": 512, 00:15:36.011 "num_blocks": 65536, 00:15:36.011 "uuid": "1a707421-8259-4ae1-a824-134ee7563405", 00:15:36.011 "assigned_rate_limits": { 00:15:36.011 "rw_ios_per_sec": 0, 00:15:36.011 "rw_mbytes_per_sec": 0, 00:15:36.011 "r_mbytes_per_sec": 0, 00:15:36.011 "w_mbytes_per_sec": 0 00:15:36.011 }, 00:15:36.011 "claimed": true, 00:15:36.011 "claim_type": "exclusive_write", 00:15:36.011 "zoned": false, 00:15:36.011 "supported_io_types": { 00:15:36.011 "read": true, 00:15:36.011 "write": true, 00:15:36.011 "unmap": true, 00:15:36.011 "flush": true, 00:15:36.011 "reset": true, 00:15:36.011 "nvme_admin": false, 00:15:36.011 "nvme_io": false, 00:15:36.011 "nvme_io_md": false, 00:15:36.011 "write_zeroes": true, 00:15:36.011 "zcopy": true, 00:15:36.011 "get_zone_info": false, 00:15:36.011 "zone_management": false, 00:15:36.011 "zone_append": false, 00:15:36.011 "compare": false, 00:15:36.011 "compare_and_write": false, 00:15:36.011 "abort": true, 00:15:36.011 "seek_hole": false, 00:15:36.011 "seek_data": false, 00:15:36.011 "copy": true, 00:15:36.011 "nvme_iov_md": false 00:15:36.011 }, 00:15:36.011 "memory_domains": [ 00:15:36.011 { 00:15:36.011 "dma_device_id": "system", 00:15:36.011 "dma_device_type": 1 00:15:36.011 }, 00:15:36.011 { 00:15:36.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.011 "dma_device_type": 2 00:15:36.011 } 00:15:36.011 ], 00:15:36.011 "driver_specific": {} 00:15:36.011 }' 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.011 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:36.269 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:36.269 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.269 11:26:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:36.269 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:36.269 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.269 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:36.269 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:36.269 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:36.527 [2024-07-25 11:26:52.401942] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.527 [2024-07-25 
11:26:52.402014] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.527 [2024-07-25 11:26:52.402139] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.527 [2024-07-25 11:26:52.402224] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.527 [2024-07-25 11:26:52.402244] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 75987 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75987 ']' 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75987 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75987 00:15:36.786 killing process with pid 75987 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75987' 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75987 00:15:36.786 11:26:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75987 00:15:36.786 [2024-07-25 11:26:52.449109] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.045 [2024-07-25 11:26:52.814727] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:38.418 00:15:38.418 real 0m38.711s 00:15:38.418 user 1m11.023s 00:15:38.418 sys 0m4.991s 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.418 ************************************ 00:15:38.418 END TEST raid_state_function_test 00:15:38.418 ************************************ 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.418 11:26:54 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:38.418 11:26:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:38.418 11:26:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.418 11:26:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.418 ************************************ 00:15:38.418 START TEST raid_state_function_test_sb 00:15:38.418 ************************************ 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:38.418 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:38.419 Process raid pid: 77099 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=77099 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 77099' 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 77099 /var/tmp/spdk-raid.sock 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:38.419 
11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77099 ']' 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.419 11:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.419 [2024-07-25 11:26:54.157308] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:15:38.419 [2024-07-25 11:26:54.157472] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.677 [2024-07-25 11:26:54.324793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.934 [2024-07-25 11:26:54.564507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.934 [2024-07-25 11:26:54.769410] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.934 [2024-07-25 11:26:54.769468] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:39.500 [2024-07-25 11:26:55.355329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.500 [2024-07-25 11:26:55.355415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.500 [2024-07-25 11:26:55.355436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.500 [2024-07-25 11:26:55.355451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.500 [2024-07-25 11:26:55.355466] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.500 [2024-07-25 11:26:55.355479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.500 [2024-07-25 11:26:55.355492] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:39.500 [2024-07-25 11:26:55.355503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.500 11:26:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.500 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.065 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.065 "name": "Existed_Raid", 00:15:40.065 "uuid": "2badbb82-1cc3-49bf-af80-db8980a41786", 00:15:40.065 "strip_size_kb": 64, 00:15:40.065 "state": "configuring", 00:15:40.065 "raid_level": "raid0", 00:15:40.065 "superblock": true, 00:15:40.065 "num_base_bdevs": 4, 00:15:40.065 "num_base_bdevs_discovered": 0, 00:15:40.065 "num_base_bdevs_operational": 4, 00:15:40.065 "base_bdevs_list": [ 00:15:40.065 { 00:15:40.065 "name": "BaseBdev1", 00:15:40.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.065 "is_configured": false, 00:15:40.065 "data_offset": 0, 00:15:40.065 "data_size": 0 00:15:40.065 }, 00:15:40.065 { 00:15:40.065 "name": "BaseBdev2", 00:15:40.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.065 "is_configured": false, 00:15:40.065 "data_offset": 0, 00:15:40.065 "data_size": 0 00:15:40.065 }, 00:15:40.065 { 00:15:40.065 "name": "BaseBdev3", 00:15:40.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.065 "is_configured": false, 00:15:40.065 "data_offset": 0, 00:15:40.065 "data_size": 0 00:15:40.065 }, 00:15:40.065 { 00:15:40.065 "name": "BaseBdev4", 00:15:40.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.065 "is_configured": false, 00:15:40.065 "data_offset": 0, 00:15:40.065 "data_size": 0 00:15:40.065 } 00:15:40.065 ] 00:15:40.065 }' 00:15:40.065 11:26:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.065 11:26:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.630 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.888 [2024-07-25 11:26:56.587456] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.888 [2024-07-25 11:26:56.587515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:40.888 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' 
-n Existed_Raid 00:15:41.146 [2024-07-25 11:26:56.831574] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.146 [2024-07-25 11:26:56.831653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.146 [2024-07-25 11:26:56.831673] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.146 [2024-07-25 11:26:56.831688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.146 [2024-07-25 11:26:56.831701] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:41.146 [2024-07-25 11:26:56.831713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.146 [2024-07-25 11:26:56.831725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:41.146 [2024-07-25 11:26:56.831736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:41.146 11:26:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.405 [2024-07-25 11:26:57.103953] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.405 BaseBdev1 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.405 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.662 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.920 [ 00:15:41.920 { 00:15:41.920 "name": "BaseBdev1", 00:15:41.920 "aliases": [ 00:15:41.920 "633037a3-e18c-4f11-be6f-e419d85eefda" 00:15:41.920 ], 00:15:41.920 "product_name": "Malloc disk", 00:15:41.920 "block_size": 512, 00:15:41.920 "num_blocks": 65536, 00:15:41.920 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:41.920 "assigned_rate_limits": { 00:15:41.920 "rw_ios_per_sec": 0, 00:15:41.920 "rw_mbytes_per_sec": 0, 00:15:41.920 "r_mbytes_per_sec": 0, 00:15:41.920 "w_mbytes_per_sec": 0 00:15:41.920 }, 00:15:41.920 "claimed": true, 00:15:41.920 "claim_type": "exclusive_write", 00:15:41.920 "zoned": false, 00:15:41.920 "supported_io_types": { 00:15:41.920 "read": true, 00:15:41.920 "write": true, 00:15:41.920 "unmap": true, 00:15:41.920 "flush": true, 00:15:41.920 "reset": true, 00:15:41.920 "nvme_admin": false, 00:15:41.920 "nvme_io": false, 00:15:41.920 "nvme_io_md": false, 00:15:41.920 "write_zeroes": true, 00:15:41.920 "zcopy": true, 00:15:41.920 "get_zone_info": false, 00:15:41.920 "zone_management": false, 00:15:41.920 
"zone_append": false, 00:15:41.920 "compare": false, 00:15:41.920 "compare_and_write": false, 00:15:41.920 "abort": true, 00:15:41.920 "seek_hole": false, 00:15:41.920 "seek_data": false, 00:15:41.920 "copy": true, 00:15:41.920 "nvme_iov_md": false 00:15:41.920 }, 00:15:41.920 "memory_domains": [ 00:15:41.920 { 00:15:41.920 "dma_device_id": "system", 00:15:41.920 "dma_device_type": 1 00:15:41.920 }, 00:15:41.920 { 00:15:41.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.920 "dma_device_type": 2 00:15:41.920 } 00:15:41.920 ], 00:15:41.920 "driver_specific": {} 00:15:41.920 } 00:15:41.920 ] 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.920 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.177 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.177 "name": "Existed_Raid", 00:15:42.177 "uuid": "e52d622f-da45-4277-b80c-d150d640244d", 00:15:42.177 "strip_size_kb": 64, 00:15:42.177 "state": "configuring", 00:15:42.177 "raid_level": "raid0", 00:15:42.177 "superblock": true, 00:15:42.177 "num_base_bdevs": 4, 00:15:42.177 "num_base_bdevs_discovered": 1, 00:15:42.177 "num_base_bdevs_operational": 4, 00:15:42.177 "base_bdevs_list": [ 00:15:42.177 { 00:15:42.177 "name": "BaseBdev1", 00:15:42.177 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:42.177 "is_configured": true, 00:15:42.177 "data_offset": 2048, 00:15:42.178 "data_size": 63488 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev2", 00:15:42.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.178 "is_configured": false, 00:15:42.178 "data_offset": 0, 00:15:42.178 "data_size": 0 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev3", 00:15:42.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.178 "is_configured": false, 00:15:42.178 "data_offset": 0, 00:15:42.178 "data_size": 0 00:15:42.178 }, 00:15:42.178 { 00:15:42.178 "name": "BaseBdev4", 00:15:42.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.178 "is_configured": false, 
00:15:42.178 "data_offset": 0, 00:15:42.178 "data_size": 0 00:15:42.178 } 00:15:42.178 ] 00:15:42.178 }' 00:15:42.178 11:26:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.178 11:26:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.744 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:43.001 [2024-07-25 11:26:58.804436] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.001 [2024-07-25 11:26:58.804706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:43.001 11:26:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:43.260 [2024-07-25 11:26:59.032589] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.260 [2024-07-25 11:26:59.035103] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.260 [2024-07-25 11:26:59.035309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.260 [2024-07-25 11:26:59.035343] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:43.260 [2024-07-25 11:26:59.035360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:43.260 [2024-07-25 11:26:59.035377] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:43.260 [2024-07-25 11:26:59.035389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.260 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.260 
11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.518 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.518 "name": "Existed_Raid", 00:15:43.518 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:43.518 "strip_size_kb": 64, 00:15:43.518 "state": "configuring", 00:15:43.518 "raid_level": "raid0", 00:15:43.518 "superblock": true, 00:15:43.518 "num_base_bdevs": 4, 00:15:43.518 "num_base_bdevs_discovered": 1, 00:15:43.518 "num_base_bdevs_operational": 4, 00:15:43.518 "base_bdevs_list": [ 00:15:43.518 { 00:15:43.518 "name": "BaseBdev1", 00:15:43.519 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:43.519 "is_configured": true, 00:15:43.519 "data_offset": 2048, 00:15:43.519 "data_size": 63488 00:15:43.519 }, 00:15:43.519 { 00:15:43.519 "name": "BaseBdev2", 00:15:43.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.519 "is_configured": false, 00:15:43.519 "data_offset": 0, 00:15:43.519 "data_size": 0 00:15:43.519 }, 00:15:43.519 { 00:15:43.519 "name": "BaseBdev3", 00:15:43.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.519 "is_configured": false, 00:15:43.519 "data_offset": 0, 00:15:43.519 "data_size": 0 00:15:43.519 }, 00:15:43.519 { 00:15:43.519 "name": "BaseBdev4", 00:15:43.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.519 "is_configured": false, 00:15:43.519 "data_offset": 0, 00:15:43.519 "data_size": 0 00:15:43.519 } 00:15:43.519 ] 00:15:43.519 }' 00:15:43.519 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.519 11:26:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.084 11:26:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:44.652 [2024-07-25 11:27:00.292728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.652 BaseBdev2 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.652 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.910 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.910 [ 00:15:44.910 { 00:15:44.910 "name": "BaseBdev2", 00:15:44.910 "aliases": [ 00:15:44.910 "9bb0b698-70a4-4c60-ba2d-9a3133faa180" 00:15:44.910 ], 00:15:44.910 "product_name": "Malloc disk", 00:15:44.910 "block_size": 512, 00:15:44.910 "num_blocks": 65536, 00:15:44.910 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:44.910 
"assigned_rate_limits": { 00:15:44.910 "rw_ios_per_sec": 0, 00:15:44.910 "rw_mbytes_per_sec": 0, 00:15:44.910 "r_mbytes_per_sec": 0, 00:15:44.910 "w_mbytes_per_sec": 0 00:15:44.910 }, 00:15:44.910 "claimed": true, 00:15:44.910 "claim_type": "exclusive_write", 00:15:44.910 "zoned": false, 00:15:44.910 "supported_io_types": { 00:15:44.910 "read": true, 00:15:44.910 "write": true, 00:15:44.910 "unmap": true, 00:15:44.910 "flush": true, 00:15:44.910 "reset": true, 00:15:44.910 "nvme_admin": false, 00:15:44.910 "nvme_io": false, 00:15:44.910 "nvme_io_md": false, 00:15:44.910 "write_zeroes": true, 00:15:44.910 "zcopy": true, 00:15:44.910 "get_zone_info": false, 00:15:44.910 "zone_management": false, 00:15:44.910 "zone_append": false, 00:15:44.910 "compare": false, 00:15:44.910 "compare_and_write": false, 00:15:44.910 "abort": true, 00:15:44.910 "seek_hole": false, 00:15:44.910 "seek_data": false, 00:15:44.910 "copy": true, 00:15:44.910 "nvme_iov_md": false 00:15:44.910 }, 00:15:44.910 "memory_domains": [ 00:15:44.910 { 00:15:44.910 "dma_device_id": "system", 00:15:44.910 "dma_device_type": 1 00:15:44.910 }, 00:15:44.910 { 00:15:44.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.910 "dma_device_type": 2 00:15:44.910 } 00:15:44.910 ], 00:15:44.910 "driver_specific": {} 00:15:44.910 } 00:15:44.910 ] 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.168 11:27:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.426 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.426 "name": "Existed_Raid", 00:15:45.426 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:45.426 "strip_size_kb": 64, 00:15:45.426 "state": "configuring", 00:15:45.426 "raid_level": "raid0", 00:15:45.426 "superblock": true, 00:15:45.426 "num_base_bdevs": 4, 00:15:45.426 
"num_base_bdevs_discovered": 2, 00:15:45.426 "num_base_bdevs_operational": 4, 00:15:45.426 "base_bdevs_list": [ 00:15:45.426 { 00:15:45.426 "name": "BaseBdev1", 00:15:45.426 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:45.426 "is_configured": true, 00:15:45.426 "data_offset": 2048, 00:15:45.426 "data_size": 63488 00:15:45.426 }, 00:15:45.426 { 00:15:45.426 "name": "BaseBdev2", 00:15:45.426 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:45.426 "is_configured": true, 00:15:45.426 "data_offset": 2048, 00:15:45.426 "data_size": 63488 00:15:45.426 }, 00:15:45.426 { 00:15:45.426 "name": "BaseBdev3", 00:15:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.426 "is_configured": false, 00:15:45.426 "data_offset": 0, 00:15:45.426 "data_size": 0 00:15:45.426 }, 00:15:45.426 { 00:15:45.426 "name": "BaseBdev4", 00:15:45.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.426 "is_configured": false, 00:15:45.426 "data_offset": 0, 00:15:45.426 "data_size": 0 00:15:45.426 } 00:15:45.426 ] 00:15:45.426 }' 00:15:45.426 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.426 11:27:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.991 11:27:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.250 [2024-07-25 11:27:02.039005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.250 BaseBdev3 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.250 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:46.509 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.767 [ 00:15:46.767 { 00:15:46.767 "name": "BaseBdev3", 00:15:46.767 "aliases": [ 00:15:46.767 "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7" 00:15:46.767 ], 00:15:46.767 "product_name": "Malloc disk", 00:15:46.767 "block_size": 512, 00:15:46.767 "num_blocks": 65536, 00:15:46.767 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:46.767 "assigned_rate_limits": { 00:15:46.767 "rw_ios_per_sec": 0, 00:15:46.767 "rw_mbytes_per_sec": 0, 00:15:46.767 "r_mbytes_per_sec": 0, 00:15:46.767 "w_mbytes_per_sec": 0 00:15:46.767 }, 00:15:46.767 "claimed": true, 00:15:46.767 "claim_type": "exclusive_write", 00:15:46.767 "zoned": false, 00:15:46.767 "supported_io_types": { 00:15:46.767 "read": true, 00:15:46.767 "write": true, 00:15:46.767 "unmap": true, 00:15:46.767 "flush": true, 00:15:46.767 "reset": true, 00:15:46.767 "nvme_admin": false, 00:15:46.767 "nvme_io": false, 
00:15:46.767 "nvme_io_md": false, 00:15:46.767 "write_zeroes": true, 00:15:46.767 "zcopy": true, 00:15:46.767 "get_zone_info": false, 00:15:46.767 "zone_management": false, 00:15:46.767 "zone_append": false, 00:15:46.767 "compare": false, 00:15:46.767 "compare_and_write": false, 00:15:46.767 "abort": true, 00:15:46.767 "seek_hole": false, 00:15:46.767 "seek_data": false, 00:15:46.767 "copy": true, 00:15:46.767 "nvme_iov_md": false 00:15:46.767 }, 00:15:46.767 "memory_domains": [ 00:15:46.767 { 00:15:46.767 "dma_device_id": "system", 00:15:46.767 "dma_device_type": 1 00:15:46.767 }, 00:15:46.767 { 00:15:46.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.767 "dma_device_type": 2 00:15:46.767 } 00:15:46.767 ], 00:15:46.767 "driver_specific": {} 00:15:46.767 } 00:15:46.767 ] 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.767 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.026 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.026 "name": "Existed_Raid", 00:15:47.026 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:47.026 "strip_size_kb": 64, 00:15:47.026 "state": "configuring", 00:15:47.026 "raid_level": "raid0", 00:15:47.026 "superblock": true, 00:15:47.026 "num_base_bdevs": 4, 00:15:47.026 "num_base_bdevs_discovered": 3, 00:15:47.026 "num_base_bdevs_operational": 4, 00:15:47.026 "base_bdevs_list": [ 00:15:47.026 { 00:15:47.026 "name": "BaseBdev1", 00:15:47.026 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:47.026 "is_configured": true, 00:15:47.026 "data_offset": 2048, 00:15:47.026 "data_size": 63488 00:15:47.026 }, 00:15:47.026 { 00:15:47.026 "name": "BaseBdev2", 00:15:47.026 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:47.026 "is_configured": true, 00:15:47.026 "data_offset": 2048, 00:15:47.026 
"data_size": 63488 00:15:47.026 }, 00:15:47.026 { 00:15:47.026 "name": "BaseBdev3", 00:15:47.026 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:47.026 "is_configured": true, 00:15:47.026 "data_offset": 2048, 00:15:47.026 "data_size": 63488 00:15:47.026 }, 00:15:47.026 { 00:15:47.026 "name": "BaseBdev4", 00:15:47.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.026 "is_configured": false, 00:15:47.026 "data_offset": 0, 00:15:47.026 "data_size": 0 00:15:47.026 } 00:15:47.026 ] 00:15:47.026 }' 00:15:47.026 11:27:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.026 11:27:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.641 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:47.899 [2024-07-25 11:27:03.705574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.899 [2024-07-25 11:27:03.706198] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.899 [2024-07-25 11:27:03.706345] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:47.899 [2024-07-25 11:27:03.706736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:47.899 BaseBdev4 00:15:47.899 [2024-07-25 11:27:03.707080] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.899 [2024-07-25 11:27:03.707099] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.899 [2024-07-25 11:27:03.707269] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.899 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.157 11:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:48.417 [ 00:15:48.417 { 00:15:48.417 "name": "BaseBdev4", 00:15:48.417 "aliases": [ 00:15:48.417 "43f3d44e-905a-4bae-bf48-2dbd1e2262d5" 00:15:48.417 ], 00:15:48.417 "product_name": "Malloc disk", 00:15:48.417 "block_size": 512, 00:15:48.417 "num_blocks": 65536, 00:15:48.417 "uuid": "43f3d44e-905a-4bae-bf48-2dbd1e2262d5", 00:15:48.417 "assigned_rate_limits": { 00:15:48.417 "rw_ios_per_sec": 0, 00:15:48.417 "rw_mbytes_per_sec": 0, 00:15:48.417 "r_mbytes_per_sec": 0, 00:15:48.417 "w_mbytes_per_sec": 0 00:15:48.417 }, 00:15:48.417 "claimed": true, 00:15:48.417 "claim_type": "exclusive_write", 00:15:48.417 
"zoned": false, 00:15:48.417 "supported_io_types": { 00:15:48.417 "read": true, 00:15:48.417 "write": true, 00:15:48.417 "unmap": true, 00:15:48.417 "flush": true, 00:15:48.417 "reset": true, 00:15:48.417 "nvme_admin": false, 00:15:48.417 "nvme_io": false, 00:15:48.417 "nvme_io_md": false, 00:15:48.417 "write_zeroes": true, 00:15:48.417 "zcopy": true, 00:15:48.417 "get_zone_info": false, 00:15:48.417 "zone_management": false, 00:15:48.417 "zone_append": false, 00:15:48.417 "compare": false, 00:15:48.417 "compare_and_write": false, 00:15:48.417 "abort": true, 00:15:48.417 "seek_hole": false, 00:15:48.417 "seek_data": false, 00:15:48.417 "copy": true, 00:15:48.417 "nvme_iov_md": false 00:15:48.417 }, 00:15:48.417 "memory_domains": [ 00:15:48.417 { 00:15:48.417 "dma_device_id": "system", 00:15:48.417 "dma_device_type": 1 00:15:48.417 }, 00:15:48.417 { 00:15:48.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.417 "dma_device_type": 2 00:15:48.417 } 00:15:48.417 ], 00:15:48.417 "driver_specific": {} 00:15:48.417 } 00:15:48.417 ] 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.417 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.676 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.676 "name": "Existed_Raid", 00:15:48.676 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:48.676 "strip_size_kb": 64, 00:15:48.676 "state": "online", 00:15:48.676 "raid_level": "raid0", 00:15:48.676 "superblock": true, 00:15:48.676 "num_base_bdevs": 4, 00:15:48.676 "num_base_bdevs_discovered": 4, 00:15:48.676 "num_base_bdevs_operational": 4, 00:15:48.676 "base_bdevs_list": [ 00:15:48.676 { 00:15:48.676 "name": "BaseBdev1", 00:15:48.676 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 2048, 
00:15:48.676 "data_size": 63488 00:15:48.676 }, 00:15:48.676 { 00:15:48.676 "name": "BaseBdev2", 00:15:48.676 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 2048, 00:15:48.676 "data_size": 63488 00:15:48.676 }, 00:15:48.676 { 00:15:48.676 "name": "BaseBdev3", 00:15:48.676 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 2048, 00:15:48.676 "data_size": 63488 00:15:48.676 }, 00:15:48.676 { 00:15:48.676 "name": "BaseBdev4", 00:15:48.676 "uuid": "43f3d44e-905a-4bae-bf48-2dbd1e2262d5", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 2048, 00:15:48.676 "data_size": 63488 00:15:48.676 } 00:15:48.676 ] 00:15:48.676 }' 00:15:48.676 11:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.676 11:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:49.612 [2024-07-25 11:27:05.346454] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:49.612 "name": "Existed_Raid", 00:15:49.612 "aliases": [ 00:15:49.612 "09e7e906-899a-45ed-9aac-0a94542369cd" 00:15:49.612 ], 00:15:49.612 "product_name": "Raid Volume", 00:15:49.612 "block_size": 512, 00:15:49.612 "num_blocks": 253952, 00:15:49.612 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:49.612 "assigned_rate_limits": { 00:15:49.612 "rw_ios_per_sec": 0, 00:15:49.612 "rw_mbytes_per_sec": 0, 00:15:49.612 "r_mbytes_per_sec": 0, 00:15:49.612 "w_mbytes_per_sec": 0 00:15:49.612 }, 00:15:49.612 "claimed": false, 00:15:49.612 "zoned": false, 00:15:49.612 "supported_io_types": { 00:15:49.612 "read": true, 00:15:49.612 "write": true, 00:15:49.612 "unmap": true, 00:15:49.612 "flush": true, 00:15:49.612 "reset": true, 00:15:49.612 "nvme_admin": false, 00:15:49.612 "nvme_io": false, 00:15:49.612 "nvme_io_md": false, 00:15:49.612 "write_zeroes": true, 00:15:49.612 "zcopy": false, 00:15:49.612 "get_zone_info": false, 00:15:49.612 "zone_management": false, 00:15:49.612 "zone_append": false, 00:15:49.612 "compare": false, 00:15:49.612 "compare_and_write": false, 00:15:49.612 "abort": false, 00:15:49.612 "seek_hole": false, 00:15:49.612 "seek_data": false, 00:15:49.612 "copy": false, 00:15:49.612 "nvme_iov_md": false 00:15:49.612 }, 00:15:49.612 "memory_domains": [ 00:15:49.612 { 00:15:49.612 "dma_device_id": "system", 00:15:49.612 
"dma_device_type": 1 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.612 "dma_device_type": 2 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "system", 00:15:49.612 "dma_device_type": 1 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.612 "dma_device_type": 2 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "system", 00:15:49.612 "dma_device_type": 1 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.612 "dma_device_type": 2 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "system", 00:15:49.612 "dma_device_type": 1 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.612 "dma_device_type": 2 00:15:49.612 } 00:15:49.612 ], 00:15:49.612 "driver_specific": { 00:15:49.612 "raid": { 00:15:49.612 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:49.612 "strip_size_kb": 64, 00:15:49.612 "state": "online", 00:15:49.612 "raid_level": "raid0", 00:15:49.612 "superblock": true, 00:15:49.612 "num_base_bdevs": 4, 00:15:49.612 "num_base_bdevs_discovered": 4, 00:15:49.612 "num_base_bdevs_operational": 4, 00:15:49.612 "base_bdevs_list": [ 00:15:49.612 { 00:15:49.612 "name": "BaseBdev1", 00:15:49.612 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:49.612 "is_configured": true, 00:15:49.612 "data_offset": 2048, 00:15:49.612 "data_size": 63488 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "name": "BaseBdev2", 00:15:49.612 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:49.612 "is_configured": true, 00:15:49.612 "data_offset": 2048, 00:15:49.612 "data_size": 63488 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "name": "BaseBdev3", 00:15:49.612 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:49.612 "is_configured": true, 00:15:49.612 "data_offset": 2048, 00:15:49.612 "data_size": 63488 00:15:49.612 }, 00:15:49.612 { 00:15:49.612 "name": "BaseBdev4", 00:15:49.612 "uuid": "43f3d44e-905a-4bae-bf48-2dbd1e2262d5", 00:15:49.612 "is_configured": true, 00:15:49.612 "data_offset": 2048, 00:15:49.612 "data_size": 63488 00:15:49.612 } 00:15:49.612 ] 00:15:49.612 } 00:15:49.612 } 00:15:49.612 }' 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:49.612 BaseBdev2 00:15:49.612 BaseBdev3 00:15:49.612 BaseBdev4' 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:49.612 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.870 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.870 "name": "BaseBdev1", 00:15:49.870 "aliases": [ 00:15:49.870 "633037a3-e18c-4f11-be6f-e419d85eefda" 00:15:49.870 ], 00:15:49.870 "product_name": "Malloc disk", 00:15:49.870 "block_size": 512, 00:15:49.870 "num_blocks": 65536, 00:15:49.870 "uuid": "633037a3-e18c-4f11-be6f-e419d85eefda", 00:15:49.870 "assigned_rate_limits": { 00:15:49.870 "rw_ios_per_sec": 0, 00:15:49.871 "rw_mbytes_per_sec": 0, 00:15:49.871 "r_mbytes_per_sec": 0, 
00:15:49.871 "w_mbytes_per_sec": 0 00:15:49.871 }, 00:15:49.871 "claimed": true, 00:15:49.871 "claim_type": "exclusive_write", 00:15:49.871 "zoned": false, 00:15:49.871 "supported_io_types": { 00:15:49.871 "read": true, 00:15:49.871 "write": true, 00:15:49.871 "unmap": true, 00:15:49.871 "flush": true, 00:15:49.871 "reset": true, 00:15:49.871 "nvme_admin": false, 00:15:49.871 "nvme_io": false, 00:15:49.871 "nvme_io_md": false, 00:15:49.871 "write_zeroes": true, 00:15:49.871 "zcopy": true, 00:15:49.871 "get_zone_info": false, 00:15:49.871 "zone_management": false, 00:15:49.871 "zone_append": false, 00:15:49.871 "compare": false, 00:15:49.871 "compare_and_write": false, 00:15:49.871 "abort": true, 00:15:49.871 "seek_hole": false, 00:15:49.871 "seek_data": false, 00:15:49.871 "copy": true, 00:15:49.871 "nvme_iov_md": false 00:15:49.871 }, 00:15:49.871 "memory_domains": [ 00:15:49.871 { 00:15:49.871 "dma_device_id": "system", 00:15:49.871 "dma_device_type": 1 00:15:49.871 }, 00:15:49.871 { 00:15:49.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.871 "dma_device_type": 2 00:15:49.871 } 00:15:49.871 ], 00:15:49.871 "driver_specific": {} 00:15:49.871 }' 00:15:49.871 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.871 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.129 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.129 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.129 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.130 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.130 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.130 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.130 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.130 11:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.388 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.388 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.388 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.388 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:50.388 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:50.647 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.647 "name": "BaseBdev2", 00:15:50.647 "aliases": [ 00:15:50.647 "9bb0b698-70a4-4c60-ba2d-9a3133faa180" 00:15:50.647 ], 00:15:50.647 "product_name": "Malloc disk", 00:15:50.647 "block_size": 512, 00:15:50.647 "num_blocks": 65536, 00:15:50.647 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:50.647 "assigned_rate_limits": { 00:15:50.647 "rw_ios_per_sec": 0, 00:15:50.647 "rw_mbytes_per_sec": 0, 00:15:50.647 "r_mbytes_per_sec": 0, 00:15:50.647 "w_mbytes_per_sec": 0 00:15:50.647 }, 00:15:50.647 "claimed": true, 00:15:50.647 "claim_type": "exclusive_write", 00:15:50.647 "zoned": 
false, 00:15:50.647 "supported_io_types": { 00:15:50.647 "read": true, 00:15:50.647 "write": true, 00:15:50.647 "unmap": true, 00:15:50.647 "flush": true, 00:15:50.647 "reset": true, 00:15:50.647 "nvme_admin": false, 00:15:50.647 "nvme_io": false, 00:15:50.647 "nvme_io_md": false, 00:15:50.647 "write_zeroes": true, 00:15:50.647 "zcopy": true, 00:15:50.647 "get_zone_info": false, 00:15:50.647 "zone_management": false, 00:15:50.647 "zone_append": false, 00:15:50.647 "compare": false, 00:15:50.647 "compare_and_write": false, 00:15:50.647 "abort": true, 00:15:50.647 "seek_hole": false, 00:15:50.647 "seek_data": false, 00:15:50.647 "copy": true, 00:15:50.647 "nvme_iov_md": false 00:15:50.647 }, 00:15:50.647 "memory_domains": [ 00:15:50.647 { 00:15:50.647 "dma_device_id": "system", 00:15:50.647 "dma_device_type": 1 00:15:50.647 }, 00:15:50.647 { 00:15:50.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.647 "dma_device_type": 2 00:15:50.647 } 00:15:50.647 ], 00:15:50.647 "driver_specific": {} 00:15:50.647 }' 00:15:50.647 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.647 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.647 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.647 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.648 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:50.906 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.165 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.165 "name": "BaseBdev3", 00:15:51.165 "aliases": [ 00:15:51.165 "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7" 00:15:51.165 ], 00:15:51.165 "product_name": "Malloc disk", 00:15:51.165 "block_size": 512, 00:15:51.165 "num_blocks": 65536, 00:15:51.165 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:51.165 "assigned_rate_limits": { 00:15:51.165 "rw_ios_per_sec": 0, 00:15:51.165 "rw_mbytes_per_sec": 0, 00:15:51.165 "r_mbytes_per_sec": 0, 00:15:51.165 "w_mbytes_per_sec": 0 00:15:51.165 }, 00:15:51.165 "claimed": true, 00:15:51.165 "claim_type": "exclusive_write", 00:15:51.165 "zoned": false, 00:15:51.165 "supported_io_types": { 00:15:51.165 "read": true, 00:15:51.165 "write": true, 00:15:51.165 "unmap": true, 00:15:51.165 "flush": 
true, 00:15:51.165 "reset": true, 00:15:51.165 "nvme_admin": false, 00:15:51.165 "nvme_io": false, 00:15:51.165 "nvme_io_md": false, 00:15:51.165 "write_zeroes": true, 00:15:51.165 "zcopy": true, 00:15:51.165 "get_zone_info": false, 00:15:51.165 "zone_management": false, 00:15:51.165 "zone_append": false, 00:15:51.165 "compare": false, 00:15:51.165 "compare_and_write": false, 00:15:51.165 "abort": true, 00:15:51.165 "seek_hole": false, 00:15:51.165 "seek_data": false, 00:15:51.165 "copy": true, 00:15:51.165 "nvme_iov_md": false 00:15:51.165 }, 00:15:51.165 "memory_domains": [ 00:15:51.165 { 00:15:51.165 "dma_device_id": "system", 00:15:51.165 "dma_device_type": 1 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.165 "dma_device_type": 2 00:15:51.165 } 00:15:51.165 ], 00:15:51.165 "driver_specific": {} 00:15:51.165 }' 00:15:51.165 11:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.165 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.425 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.689 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.689 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.689 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.689 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.689 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.954 "name": "BaseBdev4", 00:15:51.954 "aliases": [ 00:15:51.954 "43f3d44e-905a-4bae-bf48-2dbd1e2262d5" 00:15:51.954 ], 00:15:51.954 "product_name": "Malloc disk", 00:15:51.954 "block_size": 512, 00:15:51.954 "num_blocks": 65536, 00:15:51.954 "uuid": "43f3d44e-905a-4bae-bf48-2dbd1e2262d5", 00:15:51.954 "assigned_rate_limits": { 00:15:51.954 "rw_ios_per_sec": 0, 00:15:51.954 "rw_mbytes_per_sec": 0, 00:15:51.954 "r_mbytes_per_sec": 0, 00:15:51.954 "w_mbytes_per_sec": 0 00:15:51.954 }, 00:15:51.954 "claimed": true, 00:15:51.954 "claim_type": "exclusive_write", 00:15:51.954 "zoned": false, 00:15:51.954 "supported_io_types": { 00:15:51.954 "read": true, 00:15:51.954 "write": true, 00:15:51.954 "unmap": true, 00:15:51.954 "flush": true, 00:15:51.954 "reset": true, 00:15:51.954 "nvme_admin": false, 00:15:51.954 "nvme_io": false, 00:15:51.954 "nvme_io_md": false, 00:15:51.954 
"write_zeroes": true, 00:15:51.954 "zcopy": true, 00:15:51.954 "get_zone_info": false, 00:15:51.954 "zone_management": false, 00:15:51.954 "zone_append": false, 00:15:51.954 "compare": false, 00:15:51.954 "compare_and_write": false, 00:15:51.954 "abort": true, 00:15:51.954 "seek_hole": false, 00:15:51.954 "seek_data": false, 00:15:51.954 "copy": true, 00:15:51.954 "nvme_iov_md": false 00:15:51.954 }, 00:15:51.954 "memory_domains": [ 00:15:51.954 { 00:15:51.954 "dma_device_id": "system", 00:15:51.954 "dma_device_type": 1 00:15:51.954 }, 00:15:51.954 { 00:15:51.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.954 "dma_device_type": 2 00:15:51.954 } 00:15:51.954 ], 00:15:51.954 "driver_specific": {} 00:15:51.954 }' 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.954 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.221 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.221 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.221 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.221 11:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.221 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.221 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.490 [2024-07-25 11:27:08.310686] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.490 [2024-07-25 11:27:08.310731] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.490 [2024-07-25 11:27:08.310802] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.761 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.022 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.022 "name": "Existed_Raid", 00:15:53.022 "uuid": "09e7e906-899a-45ed-9aac-0a94542369cd", 00:15:53.022 "strip_size_kb": 64, 00:15:53.022 "state": "offline", 00:15:53.022 "raid_level": "raid0", 00:15:53.023 "superblock": true, 00:15:53.023 "num_base_bdevs": 4, 00:15:53.023 "num_base_bdevs_discovered": 3, 00:15:53.023 "num_base_bdevs_operational": 3, 00:15:53.023 "base_bdevs_list": [ 00:15:53.023 { 00:15:53.023 "name": null, 00:15:53.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.023 "is_configured": false, 00:15:53.023 "data_offset": 2048, 00:15:53.023 "data_size": 63488 00:15:53.023 }, 00:15:53.023 { 00:15:53.023 "name": "BaseBdev2", 00:15:53.023 "uuid": "9bb0b698-70a4-4c60-ba2d-9a3133faa180", 00:15:53.023 "is_configured": true, 00:15:53.023 "data_offset": 2048, 00:15:53.023 "data_size": 63488 00:15:53.023 }, 00:15:53.023 { 00:15:53.023 "name": "BaseBdev3", 00:15:53.023 "uuid": "5a6aea9f-dfcc-44ab-ab7f-a37d22d7dab7", 00:15:53.023 "is_configured": true, 00:15:53.023 "data_offset": 2048, 00:15:53.023 "data_size": 63488 00:15:53.023 }, 00:15:53.023 { 00:15:53.023 "name": "BaseBdev4", 00:15:53.023 "uuid": "43f3d44e-905a-4bae-bf48-2dbd1e2262d5", 00:15:53.023 "is_configured": true, 00:15:53.023 "data_offset": 2048, 00:15:53.023 "data_size": 63488 00:15:53.023 } 00:15:53.023 ] 00:15:53.023 }' 00:15:53.023 11:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.023 11:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.588 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:53.588 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:53.588 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.588 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:53.845 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:53.845 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.845 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.104 [2024-07-25 11:27:09.851505] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.104 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.104 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.104 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.104 11:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:54.363 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:54.363 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.363 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:54.621 [2024-07-25 11:27:10.467974] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.957 11:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:55.216 [2024-07-25 11:27:11.048879] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:55.216 [2024-07-25 11:27:11.048945] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:55.474 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:55.474 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:55.474 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.474 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.732 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:55.732 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:55.732 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:55.732 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:55.732 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:55.732 11:27:11 
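The loop entered above re-creates the base devices that were just torn down before the array is reassembled. As a minimal standalone sketch of that per-device pattern (assuming an SPDK target is already serving RPCs on /var/tmp/spdk-raid.sock, and using only calls that appear in this trace):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev2 BaseBdev3 BaseBdev4; do
        # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the dumps above)
        $RPC bdev_malloc_create 32 512 -b "$name"
        # let examine callbacks finish, then confirm the bdev shows up (2 s timeout)
        $RPC bdev_wait_for_examine
        $RPC bdev_get_bdevs -b "$name" -t 2000
    done
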
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:55.990 BaseBdev2 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.990 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.248 11:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.526 [ 00:15:56.526 { 00:15:56.526 "name": "BaseBdev2", 00:15:56.526 "aliases": [ 00:15:56.526 "a92676d7-39ed-4723-83df-f78faebe1832" 00:15:56.526 ], 00:15:56.526 "product_name": "Malloc disk", 00:15:56.526 "block_size": 512, 00:15:56.526 "num_blocks": 65536, 00:15:56.526 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:15:56.526 "assigned_rate_limits": { 00:15:56.526 "rw_ios_per_sec": 0, 00:15:56.526 "rw_mbytes_per_sec": 0, 00:15:56.526 "r_mbytes_per_sec": 0, 00:15:56.526 "w_mbytes_per_sec": 0 00:15:56.526 }, 00:15:56.526 "claimed": false, 00:15:56.526 "zoned": false, 00:15:56.526 "supported_io_types": { 00:15:56.526 "read": true, 00:15:56.526 "write": true, 00:15:56.526 "unmap": true, 00:15:56.526 "flush": true, 00:15:56.526 "reset": true, 00:15:56.526 "nvme_admin": false, 00:15:56.526 "nvme_io": false, 00:15:56.526 "nvme_io_md": false, 00:15:56.526 "write_zeroes": true, 00:15:56.526 "zcopy": true, 00:15:56.526 "get_zone_info": false, 00:15:56.526 "zone_management": false, 00:15:56.526 "zone_append": false, 00:15:56.526 "compare": false, 00:15:56.526 "compare_and_write": false, 00:15:56.526 "abort": true, 00:15:56.526 "seek_hole": false, 00:15:56.526 "seek_data": false, 00:15:56.526 "copy": true, 00:15:56.526 "nvme_iov_md": false 00:15:56.526 }, 00:15:56.526 "memory_domains": [ 00:15:56.526 { 00:15:56.526 "dma_device_id": "system", 00:15:56.526 "dma_device_type": 1 00:15:56.526 }, 00:15:56.526 { 00:15:56.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.526 "dma_device_type": 2 00:15:56.526 } 00:15:56.526 ], 00:15:56.526 "driver_specific": {} 00:15:56.526 } 00:15:56.526 ] 00:15:56.526 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:56.526 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:56.526 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:56.526 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.785 BaseBdev3 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.785 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.044 11:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:57.302 [ 00:15:57.302 { 00:15:57.302 "name": "BaseBdev3", 00:15:57.302 "aliases": [ 00:15:57.302 "e4f788f9-ce1c-4f1b-9578-4aafb883e635" 00:15:57.302 ], 00:15:57.302 "product_name": "Malloc disk", 00:15:57.302 "block_size": 512, 00:15:57.302 "num_blocks": 65536, 00:15:57.302 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:15:57.302 "assigned_rate_limits": { 00:15:57.302 "rw_ios_per_sec": 0, 00:15:57.302 "rw_mbytes_per_sec": 0, 00:15:57.302 "r_mbytes_per_sec": 0, 00:15:57.302 "w_mbytes_per_sec": 0 00:15:57.302 }, 00:15:57.302 "claimed": false, 00:15:57.302 "zoned": false, 00:15:57.302 "supported_io_types": { 00:15:57.302 "read": true, 00:15:57.302 "write": true, 00:15:57.302 "unmap": true, 00:15:57.302 "flush": true, 00:15:57.302 "reset": true, 00:15:57.302 "nvme_admin": false, 00:15:57.302 "nvme_io": false, 00:15:57.302 "nvme_io_md": false, 00:15:57.302 "write_zeroes": true, 00:15:57.302 "zcopy": true, 00:15:57.302 "get_zone_info": false, 00:15:57.302 "zone_management": false, 00:15:57.302 "zone_append": false, 00:15:57.302 "compare": false, 00:15:57.302 "compare_and_write": false, 00:15:57.302 "abort": true, 00:15:57.302 "seek_hole": false, 00:15:57.302 "seek_data": false, 00:15:57.302 "copy": true, 00:15:57.302 "nvme_iov_md": false 00:15:57.302 }, 00:15:57.302 "memory_domains": [ 00:15:57.302 { 00:15:57.302 "dma_device_id": "system", 00:15:57.302 "dma_device_type": 1 00:15:57.302 }, 00:15:57.302 { 00:15:57.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.302 "dma_device_type": 2 00:15:57.302 } 00:15:57.302 ], 00:15:57.302 "driver_specific": {} 00:15:57.302 } 00:15:57.302 ] 00:15:57.302 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:57.302 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:57.302 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:57.302 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:57.560 BaseBdev4 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@901 -- # local i 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:57.560 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.819 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:58.077 [ 00:15:58.077 { 00:15:58.077 "name": "BaseBdev4", 00:15:58.077 "aliases": [ 00:15:58.077 "b71ffd39-7807-4acc-95e2-30ce997c84c1" 00:15:58.077 ], 00:15:58.077 "product_name": "Malloc disk", 00:15:58.077 "block_size": 512, 00:15:58.077 "num_blocks": 65536, 00:15:58.077 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:15:58.077 "assigned_rate_limits": { 00:15:58.077 "rw_ios_per_sec": 0, 00:15:58.077 "rw_mbytes_per_sec": 0, 00:15:58.077 "r_mbytes_per_sec": 0, 00:15:58.077 "w_mbytes_per_sec": 0 00:15:58.077 }, 00:15:58.077 "claimed": false, 00:15:58.077 "zoned": false, 00:15:58.077 "supported_io_types": { 00:15:58.077 "read": true, 00:15:58.077 "write": true, 00:15:58.077 "unmap": true, 00:15:58.077 "flush": true, 00:15:58.077 "reset": true, 00:15:58.077 "nvme_admin": false, 00:15:58.077 "nvme_io": false, 00:15:58.077 "nvme_io_md": false, 00:15:58.077 "write_zeroes": true, 00:15:58.077 "zcopy": true, 00:15:58.077 "get_zone_info": false, 00:15:58.077 "zone_management": false, 00:15:58.077 "zone_append": false, 00:15:58.077 "compare": false, 00:15:58.077 "compare_and_write": false, 00:15:58.077 "abort": true, 00:15:58.077 "seek_hole": false, 00:15:58.077 "seek_data": false, 00:15:58.077 "copy": true, 00:15:58.077 "nvme_iov_md": false 00:15:58.077 }, 00:15:58.077 "memory_domains": [ 00:15:58.077 { 00:15:58.077 "dma_device_id": "system", 00:15:58.077 "dma_device_type": 1 00:15:58.077 }, 00:15:58.077 { 00:15:58.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.077 "dma_device_type": 2 00:15:58.077 } 00:15:58.077 ], 00:15:58.077 "driver_specific": {} 00:15:58.077 } 00:15:58.077 ] 00:15:58.077 11:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:58.077 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:58.078 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:58.078 11:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:58.336 [2024-07-25 11:27:14.033732] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.336 [2024-07-25 11:27:14.033966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.336 [2024-07-25 11:27:14.034124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.336 [2024-07-25 11:27:14.036457] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.336 [2024-07-25 11:27:14.036692] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:58.336 11:27:14 
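With BaseBdev2-4 claimed but BaseBdev1 still absent, the bdev_raid_create call traced above leaves Existed_Raid in the configuring state. A hedged sketch of that assembly plus the follow-up state query, with the socket path, options, and jq filter copied from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # raid0, 64 KiB strip size (-z 64), on-disk superblock enabled (-s)
    $RPC bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # the missing base bdev shows up as num_base_bdevs_discovered=3, state=configuring
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
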
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.336 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.594 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.594 "name": "Existed_Raid", 00:15:58.594 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:15:58.594 "strip_size_kb": 64, 00:15:58.594 "state": "configuring", 00:15:58.594 "raid_level": "raid0", 00:15:58.594 "superblock": true, 00:15:58.594 "num_base_bdevs": 4, 00:15:58.594 "num_base_bdevs_discovered": 3, 00:15:58.594 "num_base_bdevs_operational": 4, 00:15:58.594 "base_bdevs_list": [ 00:15:58.594 { 00:15:58.594 "name": "BaseBdev1", 00:15:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.594 "is_configured": false, 00:15:58.594 "data_offset": 0, 00:15:58.594 "data_size": 0 00:15:58.594 }, 00:15:58.594 { 00:15:58.594 "name": "BaseBdev2", 00:15:58.594 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:15:58.594 "is_configured": true, 00:15:58.594 "data_offset": 2048, 00:15:58.594 "data_size": 63488 00:15:58.594 }, 00:15:58.594 { 00:15:58.594 "name": "BaseBdev3", 00:15:58.594 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:15:58.594 "is_configured": true, 00:15:58.594 "data_offset": 2048, 00:15:58.594 "data_size": 63488 00:15:58.594 }, 00:15:58.594 { 00:15:58.594 "name": "BaseBdev4", 00:15:58.594 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:15:58.594 "is_configured": true, 00:15:58.594 "data_offset": 2048, 00:15:58.594 "data_size": 63488 00:15:58.594 } 00:15:58.594 ] 00:15:58.594 }' 00:15:58.594 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.594 11:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.159 11:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:59.417 [2024-07-25 11:27:15.246055] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.417 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.675 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.675 "name": "Existed_Raid", 00:15:59.675 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:15:59.675 "strip_size_kb": 64, 00:15:59.675 "state": "configuring", 00:15:59.675 "raid_level": "raid0", 00:15:59.675 "superblock": true, 00:15:59.675 "num_base_bdevs": 4, 00:15:59.675 "num_base_bdevs_discovered": 2, 00:15:59.675 "num_base_bdevs_operational": 4, 00:15:59.675 "base_bdevs_list": [ 00:15:59.675 { 00:15:59.675 "name": "BaseBdev1", 00:15:59.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.675 "is_configured": false, 00:15:59.675 "data_offset": 0, 00:15:59.675 "data_size": 0 00:15:59.675 }, 00:15:59.675 { 00:15:59.675 "name": null, 00:15:59.675 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:15:59.675 "is_configured": false, 00:15:59.675 "data_offset": 2048, 00:15:59.675 "data_size": 63488 00:15:59.675 }, 00:15:59.675 { 00:15:59.675 "name": "BaseBdev3", 00:15:59.675 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:15:59.675 "is_configured": true, 00:15:59.675 "data_offset": 2048, 00:15:59.675 "data_size": 63488 00:15:59.675 }, 00:15:59.675 { 00:15:59.675 "name": "BaseBdev4", 00:15:59.675 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:15:59.675 "is_configured": true, 00:15:59.675 "data_offset": 2048, 00:15:59.675 "data_size": 63488 00:15:59.676 } 00:15:59.676 ] 00:15:59.676 }' 00:15:59.676 11:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.676 11:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.667 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.667 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.667 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:00.667 11:27:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.925 [2024-07-25 11:27:16.725728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.925 BaseBdev1 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.925 11:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.183 11:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.442 [ 00:16:01.442 { 00:16:01.442 "name": "BaseBdev1", 00:16:01.442 "aliases": [ 00:16:01.442 "e31759b2-f7a6-4a2b-a29d-24152a613a59" 00:16:01.442 ], 00:16:01.442 "product_name": "Malloc disk", 00:16:01.442 "block_size": 512, 00:16:01.442 "num_blocks": 65536, 00:16:01.442 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:01.442 "assigned_rate_limits": { 00:16:01.442 "rw_ios_per_sec": 0, 00:16:01.442 "rw_mbytes_per_sec": 0, 00:16:01.442 "r_mbytes_per_sec": 0, 00:16:01.442 "w_mbytes_per_sec": 0 00:16:01.442 }, 00:16:01.442 "claimed": true, 00:16:01.442 "claim_type": "exclusive_write", 00:16:01.442 "zoned": false, 00:16:01.442 "supported_io_types": { 00:16:01.442 "read": true, 00:16:01.442 "write": true, 00:16:01.442 "unmap": true, 00:16:01.442 "flush": true, 00:16:01.442 "reset": true, 00:16:01.442 "nvme_admin": false, 00:16:01.442 "nvme_io": false, 00:16:01.442 "nvme_io_md": false, 00:16:01.442 "write_zeroes": true, 00:16:01.442 "zcopy": true, 00:16:01.442 "get_zone_info": false, 00:16:01.442 "zone_management": false, 00:16:01.442 "zone_append": false, 00:16:01.442 "compare": false, 00:16:01.442 "compare_and_write": false, 00:16:01.442 "abort": true, 00:16:01.442 "seek_hole": false, 00:16:01.442 "seek_data": false, 00:16:01.442 "copy": true, 00:16:01.442 "nvme_iov_md": false 00:16:01.442 }, 00:16:01.442 "memory_domains": [ 00:16:01.442 { 00:16:01.442 "dma_device_id": "system", 00:16:01.442 "dma_device_type": 1 00:16:01.442 }, 00:16:01.442 { 00:16:01.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.442 "dma_device_type": 2 00:16:01.442 } 00:16:01.442 ], 00:16:01.442 "driver_specific": {} 00:16:01.442 } 00:16:01.442 ] 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.442 11:27:17 
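The verify_raid_bdev_state helper whose locals are being set up here effectively compares fields of the bdev_raid_get_bdevs output against the expected values. A rough approximation (not the helper's actual body; field names taken from the JSON dumps in this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # expected here: configuring / raid0 / strip 64 / 4 operational base bdevs
    [[ $(jq -r '.state' <<< "$info") == configuring ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid0 ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 4 ]]
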
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.442 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.701 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.701 "name": "Existed_Raid", 00:16:01.701 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:01.701 "strip_size_kb": 64, 00:16:01.701 "state": "configuring", 00:16:01.701 "raid_level": "raid0", 00:16:01.701 "superblock": true, 00:16:01.701 "num_base_bdevs": 4, 00:16:01.701 "num_base_bdevs_discovered": 3, 00:16:01.701 "num_base_bdevs_operational": 4, 00:16:01.701 "base_bdevs_list": [ 00:16:01.701 { 00:16:01.701 "name": "BaseBdev1", 00:16:01.701 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:01.701 "is_configured": true, 00:16:01.701 "data_offset": 2048, 00:16:01.701 "data_size": 63488 00:16:01.701 }, 00:16:01.701 { 00:16:01.701 "name": null, 00:16:01.701 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:01.701 "is_configured": false, 00:16:01.701 "data_offset": 2048, 00:16:01.701 "data_size": 63488 00:16:01.701 }, 00:16:01.701 { 00:16:01.701 "name": "BaseBdev3", 00:16:01.701 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:01.701 "is_configured": true, 00:16:01.701 "data_offset": 2048, 00:16:01.701 "data_size": 63488 00:16:01.701 }, 00:16:01.701 { 00:16:01.701 "name": "BaseBdev4", 00:16:01.701 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:01.701 "is_configured": true, 00:16:01.701 "data_offset": 2048, 00:16:01.701 "data_size": 63488 00:16:01.701 } 00:16:01.701 ] 00:16:01.701 }' 00:16:01.701 11:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.701 11:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.268 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:02.268 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.526 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:02.526 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:02.785 [2024-07-25 11:27:18.642409] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.785 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.043 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.043 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.324 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.324 "name": "Existed_Raid", 00:16:03.324 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:03.324 "strip_size_kb": 64, 00:16:03.324 "state": "configuring", 00:16:03.324 "raid_level": "raid0", 00:16:03.324 "superblock": true, 00:16:03.324 "num_base_bdevs": 4, 00:16:03.324 "num_base_bdevs_discovered": 2, 00:16:03.324 "num_base_bdevs_operational": 4, 00:16:03.324 "base_bdevs_list": [ 00:16:03.324 { 00:16:03.324 "name": "BaseBdev1", 00:16:03.324 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:03.324 "is_configured": true, 00:16:03.324 "data_offset": 2048, 00:16:03.324 "data_size": 63488 00:16:03.324 }, 00:16:03.324 { 00:16:03.324 "name": null, 00:16:03.324 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:03.324 "is_configured": false, 00:16:03.324 "data_offset": 2048, 00:16:03.324 "data_size": 63488 00:16:03.324 }, 00:16:03.324 { 00:16:03.324 "name": null, 00:16:03.324 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:03.324 "is_configured": false, 00:16:03.324 "data_offset": 2048, 00:16:03.324 "data_size": 63488 00:16:03.324 }, 00:16:03.324 { 00:16:03.324 "name": "BaseBdev4", 00:16:03.324 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:03.324 "is_configured": true, 00:16:03.324 "data_offset": 2048, 00:16:03.324 "data_size": 63488 00:16:03.324 } 00:16:03.324 ] 00:16:03.324 }' 00:16:03.324 11:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.324 11:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.890 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.890 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:04.148 11:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:04.148 11:27:19 
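The trace above dropped BaseBdev3 out of the configuring array and confirmed that slot 2 went unconfigured; the next call adds it back. A short sketch of that remove/re-add round trip, using only RPCs and jq filters that appear verbatim in this trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_remove_base_bdev BaseBdev3
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect false
    $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect true
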
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:04.406 [2024-07-25 11:27:20.150829] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.406 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.664 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.664 "name": "Existed_Raid", 00:16:04.664 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:04.664 "strip_size_kb": 64, 00:16:04.664 "state": "configuring", 00:16:04.664 "raid_level": "raid0", 00:16:04.664 "superblock": true, 00:16:04.664 "num_base_bdevs": 4, 00:16:04.664 "num_base_bdevs_discovered": 3, 00:16:04.664 "num_base_bdevs_operational": 4, 00:16:04.664 "base_bdevs_list": [ 00:16:04.664 { 00:16:04.664 "name": "BaseBdev1", 00:16:04.664 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:04.664 "is_configured": true, 00:16:04.664 "data_offset": 2048, 00:16:04.664 "data_size": 63488 00:16:04.664 }, 00:16:04.664 { 00:16:04.664 "name": null, 00:16:04.664 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:04.664 "is_configured": false, 00:16:04.664 "data_offset": 2048, 00:16:04.664 "data_size": 63488 00:16:04.664 }, 00:16:04.664 { 00:16:04.664 "name": "BaseBdev3", 00:16:04.664 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:04.664 "is_configured": true, 00:16:04.664 "data_offset": 2048, 00:16:04.664 "data_size": 63488 00:16:04.664 }, 00:16:04.664 { 00:16:04.664 "name": "BaseBdev4", 00:16:04.664 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:04.664 "is_configured": true, 00:16:04.664 "data_offset": 2048, 00:16:04.664 "data_size": 63488 00:16:04.664 } 00:16:04.664 ] 00:16:04.664 }' 00:16:04.664 11:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.664 11:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.601 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 
-- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:05.601 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.601 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:05.601 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:05.860 [2024-07-25 11:27:21.718424] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.118 11:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.377 11:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.377 "name": "Existed_Raid", 00:16:06.377 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:06.377 "strip_size_kb": 64, 00:16:06.377 "state": "configuring", 00:16:06.377 "raid_level": "raid0", 00:16:06.377 "superblock": true, 00:16:06.377 "num_base_bdevs": 4, 00:16:06.377 "num_base_bdevs_discovered": 2, 00:16:06.377 "num_base_bdevs_operational": 4, 00:16:06.377 "base_bdevs_list": [ 00:16:06.377 { 00:16:06.377 "name": null, 00:16:06.377 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:06.377 "is_configured": false, 00:16:06.377 "data_offset": 2048, 00:16:06.377 "data_size": 63488 00:16:06.377 }, 00:16:06.377 { 00:16:06.377 "name": null, 00:16:06.377 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:06.377 "is_configured": false, 00:16:06.377 "data_offset": 2048, 00:16:06.377 "data_size": 63488 00:16:06.377 }, 00:16:06.377 { 00:16:06.377 "name": "BaseBdev3", 00:16:06.377 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:06.377 "is_configured": true, 00:16:06.377 "data_offset": 2048, 00:16:06.377 "data_size": 63488 00:16:06.377 }, 00:16:06.377 { 00:16:06.377 "name": "BaseBdev4", 00:16:06.377 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:06.377 "is_configured": true, 00:16:06.377 "data_offset": 2048, 00:16:06.377 "data_size": 63488 00:16:06.377 
} 00:16:06.377 ] 00:16:06.377 }' 00:16:06.377 11:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.377 11:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.944 11:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.944 11:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:07.205 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:07.205 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:07.464 [2024-07-25 11:27:23.255924] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.464 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.721 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.721 "name": "Existed_Raid", 00:16:07.721 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:07.721 "strip_size_kb": 64, 00:16:07.721 "state": "configuring", 00:16:07.721 "raid_level": "raid0", 00:16:07.721 "superblock": true, 00:16:07.721 "num_base_bdevs": 4, 00:16:07.721 "num_base_bdevs_discovered": 3, 00:16:07.721 "num_base_bdevs_operational": 4, 00:16:07.721 "base_bdevs_list": [ 00:16:07.721 { 00:16:07.721 "name": null, 00:16:07.721 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:07.721 "is_configured": false, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev2", 00:16:07.721 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev3", 00:16:07.721 "uuid": 
"e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 }, 00:16:07.721 { 00:16:07.721 "name": "BaseBdev4", 00:16:07.721 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:07.721 "is_configured": true, 00:16:07.721 "data_offset": 2048, 00:16:07.721 "data_size": 63488 00:16:07.721 } 00:16:07.721 ] 00:16:07.721 }' 00:16:07.721 11:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.721 11:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.657 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.657 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.657 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:08.657 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.657 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:08.914 11:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e31759b2-f7a6-4a2b-a29d-24152a613a59 00:16:09.480 [2024-07-25 11:27:25.064585] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:09.480 [2024-07-25 11:27:25.064953] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:09.480 [2024-07-25 11:27:25.064991] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:09.480 [2024-07-25 11:27:25.065329] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:09.480 [2024-07-25 11:27:25.065541] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:09.480 [2024-07-25 11:27:25.065568] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:09.480 [2024-07-25 11:27:25.065764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.480 NewBaseBdev 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:09.480 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:09.738 [ 00:16:09.738 { 00:16:09.738 "name": "NewBaseBdev", 00:16:09.738 "aliases": [ 00:16:09.738 "e31759b2-f7a6-4a2b-a29d-24152a613a59" 00:16:09.738 ], 00:16:09.738 "product_name": "Malloc disk", 00:16:09.738 "block_size": 512, 00:16:09.738 "num_blocks": 65536, 00:16:09.738 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:09.738 "assigned_rate_limits": { 00:16:09.738 "rw_ios_per_sec": 0, 00:16:09.738 "rw_mbytes_per_sec": 0, 00:16:09.738 "r_mbytes_per_sec": 0, 00:16:09.738 "w_mbytes_per_sec": 0 00:16:09.738 }, 00:16:09.738 "claimed": true, 00:16:09.738 "claim_type": "exclusive_write", 00:16:09.738 "zoned": false, 00:16:09.738 "supported_io_types": { 00:16:09.738 "read": true, 00:16:09.738 "write": true, 00:16:09.738 "unmap": true, 00:16:09.738 "flush": true, 00:16:09.738 "reset": true, 00:16:09.738 "nvme_admin": false, 00:16:09.738 "nvme_io": false, 00:16:09.738 "nvme_io_md": false, 00:16:09.738 "write_zeroes": true, 00:16:09.738 "zcopy": true, 00:16:09.738 "get_zone_info": false, 00:16:09.738 "zone_management": false, 00:16:09.738 "zone_append": false, 00:16:09.738 "compare": false, 00:16:09.738 "compare_and_write": false, 00:16:09.738 "abort": true, 00:16:09.738 "seek_hole": false, 00:16:09.738 "seek_data": false, 00:16:09.738 "copy": true, 00:16:09.738 "nvme_iov_md": false 00:16:09.738 }, 00:16:09.738 "memory_domains": [ 00:16:09.738 { 00:16:09.738 "dma_device_id": "system", 00:16:09.738 "dma_device_type": 1 00:16:09.738 }, 00:16:09.738 { 00:16:09.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.738 "dma_device_type": 2 00:16:09.738 } 00:16:09.738 ], 00:16:09.738 "driver_specific": {} 00:16:09.738 } 00:16:09.738 ] 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.738 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.995 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.995 "name": "Existed_Raid", 00:16:09.995 "uuid": 
"40c3c030-2bad-4c22-a508-834834965d74", 00:16:09.995 "strip_size_kb": 64, 00:16:09.995 "state": "online", 00:16:09.995 "raid_level": "raid0", 00:16:09.995 "superblock": true, 00:16:09.995 "num_base_bdevs": 4, 00:16:09.995 "num_base_bdevs_discovered": 4, 00:16:09.995 "num_base_bdevs_operational": 4, 00:16:09.995 "base_bdevs_list": [ 00:16:09.995 { 00:16:09.995 "name": "NewBaseBdev", 00:16:09.995 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:09.995 "is_configured": true, 00:16:09.995 "data_offset": 2048, 00:16:09.995 "data_size": 63488 00:16:09.995 }, 00:16:09.995 { 00:16:09.995 "name": "BaseBdev2", 00:16:09.995 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:09.995 "is_configured": true, 00:16:09.995 "data_offset": 2048, 00:16:09.995 "data_size": 63488 00:16:09.995 }, 00:16:09.995 { 00:16:09.996 "name": "BaseBdev3", 00:16:09.996 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:09.996 "is_configured": true, 00:16:09.996 "data_offset": 2048, 00:16:09.996 "data_size": 63488 00:16:09.996 }, 00:16:09.996 { 00:16:09.996 "name": "BaseBdev4", 00:16:09.996 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:09.996 "is_configured": true, 00:16:09.996 "data_offset": 2048, 00:16:09.996 "data_size": 63488 00:16:09.996 } 00:16:09.996 ] 00:16:09.996 }' 00:16:09.996 11:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.996 11:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:10.561 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:11.126 [2024-07-25 11:27:26.706450] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.126 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:11.126 "name": "Existed_Raid", 00:16:11.126 "aliases": [ 00:16:11.126 "40c3c030-2bad-4c22-a508-834834965d74" 00:16:11.126 ], 00:16:11.126 "product_name": "Raid Volume", 00:16:11.126 "block_size": 512, 00:16:11.126 "num_blocks": 253952, 00:16:11.126 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:11.126 "assigned_rate_limits": { 00:16:11.126 "rw_ios_per_sec": 0, 00:16:11.126 "rw_mbytes_per_sec": 0, 00:16:11.126 "r_mbytes_per_sec": 0, 00:16:11.126 "w_mbytes_per_sec": 0 00:16:11.126 }, 00:16:11.126 "claimed": false, 00:16:11.126 "zoned": false, 00:16:11.126 "supported_io_types": { 00:16:11.126 "read": true, 00:16:11.126 "write": true, 00:16:11.126 "unmap": true, 00:16:11.126 "flush": true, 00:16:11.126 "reset": true, 00:16:11.126 "nvme_admin": false, 00:16:11.126 "nvme_io": false, 00:16:11.126 "nvme_io_md": false, 00:16:11.126 
"write_zeroes": true, 00:16:11.126 "zcopy": false, 00:16:11.126 "get_zone_info": false, 00:16:11.126 "zone_management": false, 00:16:11.126 "zone_append": false, 00:16:11.126 "compare": false, 00:16:11.126 "compare_and_write": false, 00:16:11.126 "abort": false, 00:16:11.126 "seek_hole": false, 00:16:11.126 "seek_data": false, 00:16:11.126 "copy": false, 00:16:11.126 "nvme_iov_md": false 00:16:11.126 }, 00:16:11.126 "memory_domains": [ 00:16:11.126 { 00:16:11.126 "dma_device_id": "system", 00:16:11.126 "dma_device_type": 1 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.126 "dma_device_type": 2 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "system", 00:16:11.126 "dma_device_type": 1 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.126 "dma_device_type": 2 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "system", 00:16:11.126 "dma_device_type": 1 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.126 "dma_device_type": 2 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "system", 00:16:11.126 "dma_device_type": 1 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.126 "dma_device_type": 2 00:16:11.126 } 00:16:11.126 ], 00:16:11.126 "driver_specific": { 00:16:11.126 "raid": { 00:16:11.126 "uuid": "40c3c030-2bad-4c22-a508-834834965d74", 00:16:11.126 "strip_size_kb": 64, 00:16:11.126 "state": "online", 00:16:11.126 "raid_level": "raid0", 00:16:11.126 "superblock": true, 00:16:11.126 "num_base_bdevs": 4, 00:16:11.126 "num_base_bdevs_discovered": 4, 00:16:11.126 "num_base_bdevs_operational": 4, 00:16:11.126 "base_bdevs_list": [ 00:16:11.126 { 00:16:11.126 "name": "NewBaseBdev", 00:16:11.126 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:11.126 "is_configured": true, 00:16:11.126 "data_offset": 2048, 00:16:11.126 "data_size": 63488 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "name": "BaseBdev2", 00:16:11.126 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:11.126 "is_configured": true, 00:16:11.126 "data_offset": 2048, 00:16:11.126 "data_size": 63488 00:16:11.126 }, 00:16:11.126 { 00:16:11.126 "name": "BaseBdev3", 00:16:11.127 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:11.127 "is_configured": true, 00:16:11.127 "data_offset": 2048, 00:16:11.127 "data_size": 63488 00:16:11.127 }, 00:16:11.127 { 00:16:11.127 "name": "BaseBdev4", 00:16:11.127 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:11.127 "is_configured": true, 00:16:11.127 "data_offset": 2048, 00:16:11.127 "data_size": 63488 00:16:11.127 } 00:16:11.127 ] 00:16:11.127 } 00:16:11.127 } 00:16:11.127 }' 00:16:11.127 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.127 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:11.127 BaseBdev2 00:16:11.127 BaseBdev3 00:16:11.127 BaseBdev4' 00:16:11.127 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.127 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:11.127 11:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.384 11:27:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.384 "name": "NewBaseBdev", 00:16:11.384 "aliases": [ 00:16:11.384 "e31759b2-f7a6-4a2b-a29d-24152a613a59" 00:16:11.384 ], 00:16:11.384 "product_name": "Malloc disk", 00:16:11.384 "block_size": 512, 00:16:11.384 "num_blocks": 65536, 00:16:11.384 "uuid": "e31759b2-f7a6-4a2b-a29d-24152a613a59", 00:16:11.384 "assigned_rate_limits": { 00:16:11.384 "rw_ios_per_sec": 0, 00:16:11.384 "rw_mbytes_per_sec": 0, 00:16:11.384 "r_mbytes_per_sec": 0, 00:16:11.384 "w_mbytes_per_sec": 0 00:16:11.384 }, 00:16:11.384 "claimed": true, 00:16:11.384 "claim_type": "exclusive_write", 00:16:11.384 "zoned": false, 00:16:11.384 "supported_io_types": { 00:16:11.384 "read": true, 00:16:11.384 "write": true, 00:16:11.384 "unmap": true, 00:16:11.384 "flush": true, 00:16:11.384 "reset": true, 00:16:11.384 "nvme_admin": false, 00:16:11.384 "nvme_io": false, 00:16:11.384 "nvme_io_md": false, 00:16:11.384 "write_zeroes": true, 00:16:11.384 "zcopy": true, 00:16:11.384 "get_zone_info": false, 00:16:11.384 "zone_management": false, 00:16:11.384 "zone_append": false, 00:16:11.384 "compare": false, 00:16:11.384 "compare_and_write": false, 00:16:11.384 "abort": true, 00:16:11.384 "seek_hole": false, 00:16:11.384 "seek_data": false, 00:16:11.384 "copy": true, 00:16:11.384 "nvme_iov_md": false 00:16:11.384 }, 00:16:11.384 "memory_domains": [ 00:16:11.384 { 00:16:11.384 "dma_device_id": "system", 00:16:11.384 "dma_device_type": 1 00:16:11.384 }, 00:16:11.384 { 00:16:11.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.384 "dma_device_type": 2 00:16:11.384 } 00:16:11.384 ], 00:16:11.384 "driver_specific": {} 00:16:11.384 }' 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:11.384 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.642 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.207 "name": "BaseBdev2", 00:16:12.207 "aliases": [ 
00:16:12.207 "a92676d7-39ed-4723-83df-f78faebe1832" 00:16:12.207 ], 00:16:12.207 "product_name": "Malloc disk", 00:16:12.207 "block_size": 512, 00:16:12.207 "num_blocks": 65536, 00:16:12.207 "uuid": "a92676d7-39ed-4723-83df-f78faebe1832", 00:16:12.207 "assigned_rate_limits": { 00:16:12.207 "rw_ios_per_sec": 0, 00:16:12.207 "rw_mbytes_per_sec": 0, 00:16:12.207 "r_mbytes_per_sec": 0, 00:16:12.207 "w_mbytes_per_sec": 0 00:16:12.207 }, 00:16:12.207 "claimed": true, 00:16:12.207 "claim_type": "exclusive_write", 00:16:12.207 "zoned": false, 00:16:12.207 "supported_io_types": { 00:16:12.207 "read": true, 00:16:12.207 "write": true, 00:16:12.207 "unmap": true, 00:16:12.207 "flush": true, 00:16:12.207 "reset": true, 00:16:12.207 "nvme_admin": false, 00:16:12.207 "nvme_io": false, 00:16:12.207 "nvme_io_md": false, 00:16:12.207 "write_zeroes": true, 00:16:12.207 "zcopy": true, 00:16:12.207 "get_zone_info": false, 00:16:12.207 "zone_management": false, 00:16:12.207 "zone_append": false, 00:16:12.207 "compare": false, 00:16:12.207 "compare_and_write": false, 00:16:12.207 "abort": true, 00:16:12.207 "seek_hole": false, 00:16:12.207 "seek_data": false, 00:16:12.207 "copy": true, 00:16:12.207 "nvme_iov_md": false 00:16:12.207 }, 00:16:12.207 "memory_domains": [ 00:16:12.207 { 00:16:12.207 "dma_device_id": "system", 00:16:12.207 "dma_device_type": 1 00:16:12.207 }, 00:16:12.207 { 00:16:12.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.207 "dma_device_type": 2 00:16:12.207 } 00:16:12.207 ], 00:16:12.207 "driver_specific": {} 00:16:12.207 }' 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.207 11:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.207 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.207 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.207 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.465 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.465 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.465 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.465 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:12.465 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.723 "name": "BaseBdev3", 00:16:12.723 "aliases": [ 00:16:12.723 "e4f788f9-ce1c-4f1b-9578-4aafb883e635" 00:16:12.723 ], 00:16:12.723 "product_name": "Malloc disk", 00:16:12.723 "block_size": 512, 
00:16:12.723 "num_blocks": 65536, 00:16:12.723 "uuid": "e4f788f9-ce1c-4f1b-9578-4aafb883e635", 00:16:12.723 "assigned_rate_limits": { 00:16:12.723 "rw_ios_per_sec": 0, 00:16:12.723 "rw_mbytes_per_sec": 0, 00:16:12.723 "r_mbytes_per_sec": 0, 00:16:12.723 "w_mbytes_per_sec": 0 00:16:12.723 }, 00:16:12.723 "claimed": true, 00:16:12.723 "claim_type": "exclusive_write", 00:16:12.723 "zoned": false, 00:16:12.723 "supported_io_types": { 00:16:12.723 "read": true, 00:16:12.723 "write": true, 00:16:12.723 "unmap": true, 00:16:12.723 "flush": true, 00:16:12.723 "reset": true, 00:16:12.723 "nvme_admin": false, 00:16:12.723 "nvme_io": false, 00:16:12.723 "nvme_io_md": false, 00:16:12.723 "write_zeroes": true, 00:16:12.723 "zcopy": true, 00:16:12.723 "get_zone_info": false, 00:16:12.723 "zone_management": false, 00:16:12.723 "zone_append": false, 00:16:12.723 "compare": false, 00:16:12.723 "compare_and_write": false, 00:16:12.723 "abort": true, 00:16:12.723 "seek_hole": false, 00:16:12.723 "seek_data": false, 00:16:12.723 "copy": true, 00:16:12.723 "nvme_iov_md": false 00:16:12.723 }, 00:16:12.723 "memory_domains": [ 00:16:12.723 { 00:16:12.723 "dma_device_id": "system", 00:16:12.723 "dma_device_type": 1 00:16:12.723 }, 00:16:12.723 { 00:16:12.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.723 "dma_device_type": 2 00:16:12.723 } 00:16:12.723 ], 00:16:12.723 "driver_specific": {} 00:16:12.723 }' 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.723 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:12.981 11:27:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:13.238 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:13.238 "name": "BaseBdev4", 00:16:13.238 "aliases": [ 00:16:13.238 "b71ffd39-7807-4acc-95e2-30ce997c84c1" 00:16:13.238 ], 00:16:13.238 "product_name": "Malloc disk", 00:16:13.238 "block_size": 512, 00:16:13.238 "num_blocks": 65536, 00:16:13.238 "uuid": "b71ffd39-7807-4acc-95e2-30ce997c84c1", 00:16:13.238 "assigned_rate_limits": { 00:16:13.238 
"rw_ios_per_sec": 0, 00:16:13.238 "rw_mbytes_per_sec": 0, 00:16:13.238 "r_mbytes_per_sec": 0, 00:16:13.238 "w_mbytes_per_sec": 0 00:16:13.238 }, 00:16:13.238 "claimed": true, 00:16:13.238 "claim_type": "exclusive_write", 00:16:13.238 "zoned": false, 00:16:13.238 "supported_io_types": { 00:16:13.238 "read": true, 00:16:13.238 "write": true, 00:16:13.238 "unmap": true, 00:16:13.238 "flush": true, 00:16:13.238 "reset": true, 00:16:13.238 "nvme_admin": false, 00:16:13.238 "nvme_io": false, 00:16:13.238 "nvme_io_md": false, 00:16:13.238 "write_zeroes": true, 00:16:13.238 "zcopy": true, 00:16:13.238 "get_zone_info": false, 00:16:13.238 "zone_management": false, 00:16:13.238 "zone_append": false, 00:16:13.238 "compare": false, 00:16:13.238 "compare_and_write": false, 00:16:13.238 "abort": true, 00:16:13.238 "seek_hole": false, 00:16:13.238 "seek_data": false, 00:16:13.238 "copy": true, 00:16:13.238 "nvme_iov_md": false 00:16:13.238 }, 00:16:13.238 "memory_domains": [ 00:16:13.238 { 00:16:13.238 "dma_device_id": "system", 00:16:13.238 "dma_device_type": 1 00:16:13.238 }, 00:16:13.238 { 00:16:13.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.238 "dma_device_type": 2 00:16:13.238 } 00:16:13.238 ], 00:16:13.238 "driver_specific": {} 00:16:13.238 }' 00:16:13.238 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.496 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.753 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.753 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.753 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.753 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.753 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:14.013 [2024-07-25 11:27:29.686893] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.013 [2024-07-25 11:27:29.686935] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.013 [2024-07-25 11:27:29.687044] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.013 [2024-07-25 11:27:29.687128] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.013 [2024-07-25 11:27:29.687152] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 77099 00:16:14.013 11:27:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77099 ']' 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77099 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77099 00:16:14.013 killing process with pid 77099 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77099' 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77099 00:16:14.013 [2024-07-25 11:27:29.731471] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.013 11:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77099 00:16:14.271 [2024-07-25 11:27:30.084504] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.687 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:15.687 00:16:15.687 real 0m37.212s 00:16:15.687 user 1m8.470s 00:16:15.687 sys 0m4.584s 00:16:15.687 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.687 ************************************ 00:16:15.687 END TEST raid_state_function_test_sb 00:16:15.687 ************************************ 00:16:15.687 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.687 11:27:31 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:15.687 11:27:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:15.687 11:27:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.687 11:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.687 ************************************ 00:16:15.687 START TEST raid_superblock_test 00:16:15.687 ************************************ 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local 
raid_bdev_name=raid_bdev1 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=78198 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 78198 /var/tmp/spdk-raid.sock 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78198 ']' 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:15.687 11:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.687 [2024-07-25 11:27:31.441033] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
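The lines that follow show raid_superblock_test starting its own bdev_svc app on a private RPC socket, waiting for it to listen, and then assembling a raid0 volume with an on-disk superblock from four passthru bdevs. A condensed sketch of that setup using the same rpc.py calls the trace records; the backgrounding, the $rpc shorthand, and the loop variable i are introduced here for brevity and are not taken from the log:

# Start a bdev_svc app that serves RPC on a private UNIX socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev pt$i.
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# raid0 with a 64 KiB strip size and a superblock (-s) over the four passthru bdevs.
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s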
00:16:15.687 [2024-07-25 11:27:31.441417] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78198 ] 00:16:15.960 [2024-07-25 11:27:31.613906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.218 [2024-07-25 11:27:31.862534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.218 [2024-07-25 11:27:32.074252] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.218 [2024-07-25 11:27:32.074552] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:16.786 malloc1 00:16:16.786 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.352 [2024-07-25 11:27:32.928000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.352 [2024-07-25 11:27:32.928358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.352 [2024-07-25 11:27:32.928436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:17.352 [2024-07-25 11:27:32.928760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.352 [2024-07-25 11:27:32.931862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.352 [2024-07-25 11:27:32.932032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.352 pt1 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.352 11:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:17.352 malloc2 00:16:17.610 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.867 [2024-07-25 11:27:33.537730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.867 [2024-07-25 11:27:33.538052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.867 [2024-07-25 11:27:33.538126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:17.868 [2024-07-25 11:27:33.538378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.868 [2024-07-25 11:27:33.541215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.868 [2024-07-25 11:27:33.541414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.868 pt2 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.868 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:18.125 malloc3 00:16:18.125 11:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:18.383 [2024-07-25 11:27:34.041804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:18.383 [2024-07-25 11:27:34.043235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.383 [2024-07-25 11:27:34.043281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:18.383 [2024-07-25 11:27:34.043302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.383 [2024-07-25 11:27:34.046146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.383 [2024-07-25 
11:27:34.046211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:18.383 pt3 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.383 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:18.641 malloc4 00:16:18.641 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:18.899 [2024-07-25 11:27:34.585310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:18.899 [2024-07-25 11:27:34.585426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.899 [2024-07-25 11:27:34.585457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:18.899 [2024-07-25 11:27:34.585483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.899 [2024-07-25 11:27:34.588403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.899 [2024-07-25 11:27:34.588452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:18.899 pt4 00:16:18.899 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:18.899 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:18.899 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:19.158 [2024-07-25 11:27:34.821463] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.158 [2024-07-25 11:27:34.823910] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.158 [2024-07-25 11:27:34.824002] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.158 [2024-07-25 11:27:34.824081] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:19.158 [2024-07-25 11:27:34.824357] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:19.158 [2024-07-25 11:27:34.824386] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:19.158 [2024-07-25 11:27:34.824963] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:19.158 [2024-07-25 11:27:34.825343] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:19.158 [2024-07-25 11:27:34.825479] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:19.158 [2024-07-25 11:27:34.826006] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.158 11:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.416 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.416 "name": "raid_bdev1", 00:16:19.416 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:19.416 "strip_size_kb": 64, 00:16:19.416 "state": "online", 00:16:19.416 "raid_level": "raid0", 00:16:19.416 "superblock": true, 00:16:19.416 "num_base_bdevs": 4, 00:16:19.416 "num_base_bdevs_discovered": 4, 00:16:19.416 "num_base_bdevs_operational": 4, 00:16:19.416 "base_bdevs_list": [ 00:16:19.416 { 00:16:19.416 "name": "pt1", 00:16:19.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 }, 00:16:19.416 { 00:16:19.416 "name": "pt2", 00:16:19.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 }, 00:16:19.416 { 00:16:19.416 "name": "pt3", 00:16:19.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 }, 00:16:19.416 { 00:16:19.416 "name": "pt4", 00:16:19.416 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.416 "is_configured": true, 00:16:19.416 "data_offset": 2048, 00:16:19.416 "data_size": 63488 00:16:19.416 } 00:16:19.416 ] 00:16:19.416 }' 00:16:19.416 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.416 11:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:19.982 11:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.240 [2024-07-25 11:27:36.090710] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.240 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:20.240 "name": "raid_bdev1", 00:16:20.240 "aliases": [ 00:16:20.240 "51ed5590-e721-4041-bf8d-4c5d1ed54304" 00:16:20.240 ], 00:16:20.240 "product_name": "Raid Volume", 00:16:20.240 "block_size": 512, 00:16:20.240 "num_blocks": 253952, 00:16:20.240 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:20.240 "assigned_rate_limits": { 00:16:20.240 "rw_ios_per_sec": 0, 00:16:20.240 "rw_mbytes_per_sec": 0, 00:16:20.240 "r_mbytes_per_sec": 0, 00:16:20.240 "w_mbytes_per_sec": 0 00:16:20.240 }, 00:16:20.240 "claimed": false, 00:16:20.240 "zoned": false, 00:16:20.240 "supported_io_types": { 00:16:20.240 "read": true, 00:16:20.240 "write": true, 00:16:20.240 "unmap": true, 00:16:20.240 "flush": true, 00:16:20.240 "reset": true, 00:16:20.240 "nvme_admin": false, 00:16:20.240 "nvme_io": false, 00:16:20.240 "nvme_io_md": false, 00:16:20.240 "write_zeroes": true, 00:16:20.240 "zcopy": false, 00:16:20.240 "get_zone_info": false, 00:16:20.240 "zone_management": false, 00:16:20.240 "zone_append": false, 00:16:20.240 "compare": false, 00:16:20.240 "compare_and_write": false, 00:16:20.240 "abort": false, 00:16:20.240 "seek_hole": false, 00:16:20.240 "seek_data": false, 00:16:20.240 "copy": false, 00:16:20.240 "nvme_iov_md": false 00:16:20.240 }, 00:16:20.240 "memory_domains": [ 00:16:20.240 { 00:16:20.240 "dma_device_id": "system", 00:16:20.240 "dma_device_type": 1 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.240 "dma_device_type": 2 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "system", 00:16:20.240 "dma_device_type": 1 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.240 "dma_device_type": 2 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "system", 00:16:20.240 "dma_device_type": 1 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.240 "dma_device_type": 2 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "system", 00:16:20.240 "dma_device_type": 1 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.240 "dma_device_type": 2 00:16:20.240 } 00:16:20.240 ], 00:16:20.240 "driver_specific": { 00:16:20.240 "raid": { 00:16:20.240 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:20.240 "strip_size_kb": 64, 00:16:20.240 "state": "online", 00:16:20.240 "raid_level": "raid0", 00:16:20.240 "superblock": true, 00:16:20.240 "num_base_bdevs": 4, 00:16:20.240 "num_base_bdevs_discovered": 4, 00:16:20.240 "num_base_bdevs_operational": 4, 00:16:20.240 
"base_bdevs_list": [ 00:16:20.240 { 00:16:20.240 "name": "pt1", 00:16:20.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.240 "is_configured": true, 00:16:20.240 "data_offset": 2048, 00:16:20.240 "data_size": 63488 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "name": "pt2", 00:16:20.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.240 "is_configured": true, 00:16:20.240 "data_offset": 2048, 00:16:20.240 "data_size": 63488 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "name": "pt3", 00:16:20.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.240 "is_configured": true, 00:16:20.240 "data_offset": 2048, 00:16:20.240 "data_size": 63488 00:16:20.240 }, 00:16:20.240 { 00:16:20.240 "name": "pt4", 00:16:20.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.240 "is_configured": true, 00:16:20.240 "data_offset": 2048, 00:16:20.240 "data_size": 63488 00:16:20.240 } 00:16:20.240 ] 00:16:20.240 } 00:16:20.240 } 00:16:20.240 }' 00:16:20.240 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.498 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:20.498 pt2 00:16:20.498 pt3 00:16:20.498 pt4' 00:16:20.498 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:20.498 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:20.498 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:20.756 "name": "pt1", 00:16:20.756 "aliases": [ 00:16:20.756 "00000000-0000-0000-0000-000000000001" 00:16:20.756 ], 00:16:20.756 "product_name": "passthru", 00:16:20.756 "block_size": 512, 00:16:20.756 "num_blocks": 65536, 00:16:20.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.756 "assigned_rate_limits": { 00:16:20.756 "rw_ios_per_sec": 0, 00:16:20.756 "rw_mbytes_per_sec": 0, 00:16:20.756 "r_mbytes_per_sec": 0, 00:16:20.756 "w_mbytes_per_sec": 0 00:16:20.756 }, 00:16:20.756 "claimed": true, 00:16:20.756 "claim_type": "exclusive_write", 00:16:20.756 "zoned": false, 00:16:20.756 "supported_io_types": { 00:16:20.756 "read": true, 00:16:20.756 "write": true, 00:16:20.756 "unmap": true, 00:16:20.756 "flush": true, 00:16:20.756 "reset": true, 00:16:20.756 "nvme_admin": false, 00:16:20.756 "nvme_io": false, 00:16:20.756 "nvme_io_md": false, 00:16:20.756 "write_zeroes": true, 00:16:20.756 "zcopy": true, 00:16:20.756 "get_zone_info": false, 00:16:20.756 "zone_management": false, 00:16:20.756 "zone_append": false, 00:16:20.756 "compare": false, 00:16:20.756 "compare_and_write": false, 00:16:20.756 "abort": true, 00:16:20.756 "seek_hole": false, 00:16:20.756 "seek_data": false, 00:16:20.756 "copy": true, 00:16:20.756 "nvme_iov_md": false 00:16:20.756 }, 00:16:20.756 "memory_domains": [ 00:16:20.756 { 00:16:20.756 "dma_device_id": "system", 00:16:20.756 "dma_device_type": 1 00:16:20.756 }, 00:16:20.756 { 00:16:20.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.756 "dma_device_type": 2 00:16:20.756 } 00:16:20.756 ], 00:16:20.756 "driver_specific": { 00:16:20.756 "passthru": { 00:16:20.756 "name": "pt1", 00:16:20.756 "base_bdev_name": "malloc1" 00:16:20.756 } 00:16:20.756 } 00:16:20.756 }' 00:16:20.756 11:27:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:20.756 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:21.014 11:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:21.273 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:21.273 "name": "pt2", 00:16:21.273 "aliases": [ 00:16:21.273 "00000000-0000-0000-0000-000000000002" 00:16:21.273 ], 00:16:21.273 "product_name": "passthru", 00:16:21.273 "block_size": 512, 00:16:21.273 "num_blocks": 65536, 00:16:21.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.273 "assigned_rate_limits": { 00:16:21.273 "rw_ios_per_sec": 0, 00:16:21.273 "rw_mbytes_per_sec": 0, 00:16:21.273 "r_mbytes_per_sec": 0, 00:16:21.273 "w_mbytes_per_sec": 0 00:16:21.273 }, 00:16:21.273 "claimed": true, 00:16:21.273 "claim_type": "exclusive_write", 00:16:21.273 "zoned": false, 00:16:21.273 "supported_io_types": { 00:16:21.273 "read": true, 00:16:21.273 "write": true, 00:16:21.273 "unmap": true, 00:16:21.273 "flush": true, 00:16:21.273 "reset": true, 00:16:21.273 "nvme_admin": false, 00:16:21.273 "nvme_io": false, 00:16:21.273 "nvme_io_md": false, 00:16:21.273 "write_zeroes": true, 00:16:21.273 "zcopy": true, 00:16:21.273 "get_zone_info": false, 00:16:21.273 "zone_management": false, 00:16:21.273 "zone_append": false, 00:16:21.273 "compare": false, 00:16:21.273 "compare_and_write": false, 00:16:21.273 "abort": true, 00:16:21.273 "seek_hole": false, 00:16:21.273 "seek_data": false, 00:16:21.273 "copy": true, 00:16:21.273 "nvme_iov_md": false 00:16:21.273 }, 00:16:21.273 "memory_domains": [ 00:16:21.273 { 00:16:21.273 "dma_device_id": "system", 00:16:21.273 "dma_device_type": 1 00:16:21.273 }, 00:16:21.273 { 00:16:21.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.273 "dma_device_type": 2 00:16:21.273 } 00:16:21.273 ], 00:16:21.273 "driver_specific": { 00:16:21.273 "passthru": { 00:16:21.273 "name": "pt2", 00:16:21.273 "base_bdev_name": "malloc2" 00:16:21.273 } 00:16:21.273 } 00:16:21.273 }' 00:16:21.273 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:21.531 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.789 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:21.789 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:21.789 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:21.789 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:21.789 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.048 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.048 "name": "pt3", 00:16:22.048 "aliases": [ 00:16:22.048 "00000000-0000-0000-0000-000000000003" 00:16:22.048 ], 00:16:22.048 "product_name": "passthru", 00:16:22.048 "block_size": 512, 00:16:22.048 "num_blocks": 65536, 00:16:22.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.048 "assigned_rate_limits": { 00:16:22.048 "rw_ios_per_sec": 0, 00:16:22.048 "rw_mbytes_per_sec": 0, 00:16:22.048 "r_mbytes_per_sec": 0, 00:16:22.048 "w_mbytes_per_sec": 0 00:16:22.048 }, 00:16:22.048 "claimed": true, 00:16:22.048 "claim_type": "exclusive_write", 00:16:22.048 "zoned": false, 00:16:22.048 "supported_io_types": { 00:16:22.048 "read": true, 00:16:22.048 "write": true, 00:16:22.048 "unmap": true, 00:16:22.048 "flush": true, 00:16:22.048 "reset": true, 00:16:22.048 "nvme_admin": false, 00:16:22.048 "nvme_io": false, 00:16:22.048 "nvme_io_md": false, 00:16:22.048 "write_zeroes": true, 00:16:22.048 "zcopy": true, 00:16:22.048 "get_zone_info": false, 00:16:22.048 "zone_management": false, 00:16:22.048 "zone_append": false, 00:16:22.048 "compare": false, 00:16:22.048 "compare_and_write": false, 00:16:22.048 "abort": true, 00:16:22.048 "seek_hole": false, 00:16:22.048 "seek_data": false, 00:16:22.048 "copy": true, 00:16:22.048 "nvme_iov_md": false 00:16:22.048 }, 00:16:22.048 "memory_domains": [ 00:16:22.048 { 00:16:22.048 "dma_device_id": "system", 00:16:22.048 "dma_device_type": 1 00:16:22.048 }, 00:16:22.048 { 00:16:22.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.048 "dma_device_type": 2 00:16:22.048 } 00:16:22.048 ], 00:16:22.048 "driver_specific": { 00:16:22.048 "passthru": { 00:16:22.048 "name": "pt3", 00:16:22.048 "base_bdev_name": "malloc3" 00:16:22.048 } 00:16:22.048 } 00:16:22.048 }' 00:16:22.048 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.048 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.048 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.048 11:27:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.048 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.306 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.306 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.307 11:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:22.307 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.565 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.565 "name": "pt4", 00:16:22.565 "aliases": [ 00:16:22.565 "00000000-0000-0000-0000-000000000004" 00:16:22.565 ], 00:16:22.565 "product_name": "passthru", 00:16:22.565 "block_size": 512, 00:16:22.565 "num_blocks": 65536, 00:16:22.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.565 "assigned_rate_limits": { 00:16:22.565 "rw_ios_per_sec": 0, 00:16:22.565 "rw_mbytes_per_sec": 0, 00:16:22.565 "r_mbytes_per_sec": 0, 00:16:22.565 "w_mbytes_per_sec": 0 00:16:22.565 }, 00:16:22.565 "claimed": true, 00:16:22.565 "claim_type": "exclusive_write", 00:16:22.565 "zoned": false, 00:16:22.565 "supported_io_types": { 00:16:22.565 "read": true, 00:16:22.565 "write": true, 00:16:22.565 "unmap": true, 00:16:22.565 "flush": true, 00:16:22.565 "reset": true, 00:16:22.565 "nvme_admin": false, 00:16:22.565 "nvme_io": false, 00:16:22.565 "nvme_io_md": false, 00:16:22.565 "write_zeroes": true, 00:16:22.565 "zcopy": true, 00:16:22.565 "get_zone_info": false, 00:16:22.565 "zone_management": false, 00:16:22.565 "zone_append": false, 00:16:22.565 "compare": false, 00:16:22.565 "compare_and_write": false, 00:16:22.565 "abort": true, 00:16:22.565 "seek_hole": false, 00:16:22.565 "seek_data": false, 00:16:22.565 "copy": true, 00:16:22.565 "nvme_iov_md": false 00:16:22.565 }, 00:16:22.565 "memory_domains": [ 00:16:22.565 { 00:16:22.565 "dma_device_id": "system", 00:16:22.565 "dma_device_type": 1 00:16:22.565 }, 00:16:22.565 { 00:16:22.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.565 "dma_device_type": 2 00:16:22.565 } 00:16:22.565 ], 00:16:22.565 "driver_specific": { 00:16:22.565 "passthru": { 00:16:22.565 "name": "pt4", 00:16:22.565 "base_bdev_name": "malloc4" 00:16:22.565 } 00:16:22.565 } 00:16:22.565 }' 00:16:22.565 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.565 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:22.822 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.080 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.080 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.080 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:23.080 11:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:23.338 [2024-07-25 11:27:39.031496] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.339 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=51ed5590-e721-4041-bf8d-4c5d1ed54304 00:16:23.339 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 51ed5590-e721-4041-bf8d-4c5d1ed54304 ']' 00:16:23.339 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:23.597 [2024-07-25 11:27:39.267142] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.597 [2024-07-25 11:27:39.267198] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.597 [2024-07-25 11:27:39.267298] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.597 [2024-07-25 11:27:39.267393] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.597 [2024-07-25 11:27:39.267410] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:23.597 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.597 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:23.855 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:23.856 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:23.856 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:23.856 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:24.114 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.114 11:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:24.373 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.373 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:24.631 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.631 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:24.889 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:24.890 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.147 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:25.148 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.148 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:25.148 11:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:25.406 [2024-07-25 11:27:41.151571] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:25.406 [2024-07-25 11:27:41.154041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:25.406 [2024-07-25 11:27:41.154117] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:25.406 [2024-07-25 11:27:41.154172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:25.406 [2024-07-25 11:27:41.154279] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:25.406 [2024-07-25 11:27:41.154367] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:25.406 [2024-07-25 11:27:41.154404] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:25.406 [2024-07-25 11:27:41.154434] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:25.406 [2024-07-25 11:27:41.154460] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.406 [2024-07-25 11:27:41.154474] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:25.406 request: 00:16:25.406 { 00:16:25.406 "name": "raid_bdev1", 00:16:25.406 "raid_level": "raid0", 00:16:25.406 "base_bdevs": [ 00:16:25.406 "malloc1", 00:16:25.406 "malloc2", 00:16:25.406 "malloc3", 00:16:25.406 "malloc4" 00:16:25.406 ], 00:16:25.406 "strip_size_kb": 64, 00:16:25.406 "superblock": false, 00:16:25.407 "method": "bdev_raid_create", 00:16:25.407 "req_id": 1 00:16:25.407 } 00:16:25.407 Got JSON-RPC error response 00:16:25.407 response: 00:16:25.407 { 00:16:25.407 "code": -17, 00:16:25.407 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:25.407 } 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.407 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:25.665 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:25.665 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:25.665 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:25.924 [2024-07-25 11:27:41.715860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:25.925 [2024-07-25 11:27:41.716031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.925 [2024-07-25 11:27:41.716105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:25.925 [2024-07-25 11:27:41.716144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.925 [2024-07-25 11:27:41.721191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.925 [2024-07-25 11:27:41.721288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:25.925 [2024-07-25 11:27:41.721544] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:25.925 [2024-07-25 11:27:41.721734] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:25.925 pt1 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:25.925 11:27:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.925 11:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.492 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.492 "name": "raid_bdev1", 00:16:26.492 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:26.492 "strip_size_kb": 64, 00:16:26.492 "state": "configuring", 00:16:26.492 "raid_level": "raid0", 00:16:26.492 "superblock": true, 00:16:26.492 "num_base_bdevs": 4, 00:16:26.492 "num_base_bdevs_discovered": 1, 00:16:26.492 "num_base_bdevs_operational": 4, 00:16:26.492 "base_bdevs_list": [ 00:16:26.492 { 00:16:26.492 "name": "pt1", 00:16:26.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.492 "is_configured": true, 00:16:26.492 "data_offset": 2048, 00:16:26.492 "data_size": 63488 00:16:26.492 }, 00:16:26.492 { 00:16:26.492 "name": null, 00:16:26.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.492 "is_configured": false, 00:16:26.492 "data_offset": 2048, 00:16:26.492 "data_size": 63488 00:16:26.492 }, 00:16:26.492 { 00:16:26.492 "name": null, 00:16:26.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.492 "is_configured": false, 00:16:26.492 "data_offset": 2048, 00:16:26.492 "data_size": 63488 00:16:26.492 }, 00:16:26.492 { 00:16:26.492 "name": null, 00:16:26.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.492 "is_configured": false, 00:16:26.492 "data_offset": 2048, 00:16:26.492 "data_size": 63488 00:16:26.492 } 00:16:26.492 ] 00:16:26.492 }' 00:16:26.492 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.492 11:27:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.057 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:16:27.057 11:27:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.316 [2024-07-25 11:27:42.985996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.316 [2024-07-25 11:27:42.986101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.316 [2024-07-25 11:27:42.986142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:27.316 [2024-07-25 11:27:42.986159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:16:27.316 [2024-07-25 11:27:42.986809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.316 [2024-07-25 11:27:42.986836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.316 [2024-07-25 11:27:42.986946] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:27.316 [2024-07-25 11:27:42.986983] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.316 pt2 00:16:27.316 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:27.579 [2024-07-25 11:27:43.302148] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.579 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.838 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.838 "name": "raid_bdev1", 00:16:27.838 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:27.838 "strip_size_kb": 64, 00:16:27.838 "state": "configuring", 00:16:27.838 "raid_level": "raid0", 00:16:27.838 "superblock": true, 00:16:27.838 "num_base_bdevs": 4, 00:16:27.838 "num_base_bdevs_discovered": 1, 00:16:27.838 "num_base_bdevs_operational": 4, 00:16:27.838 "base_bdevs_list": [ 00:16:27.838 { 00:16:27.838 "name": "pt1", 00:16:27.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.838 "is_configured": true, 00:16:27.838 "data_offset": 2048, 00:16:27.838 "data_size": 63488 00:16:27.838 }, 00:16:27.838 { 00:16:27.838 "name": null, 00:16:27.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.838 "is_configured": false, 00:16:27.838 "data_offset": 2048, 00:16:27.838 "data_size": 63488 00:16:27.838 }, 00:16:27.838 { 00:16:27.838 "name": null, 00:16:27.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.838 "is_configured": false, 00:16:27.838 "data_offset": 2048, 00:16:27.838 "data_size": 63488 00:16:27.838 }, 00:16:27.838 { 00:16:27.838 "name": null, 00:16:27.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.838 "is_configured": false, 00:16:27.838 "data_offset": 2048, 
00:16:27.838 "data_size": 63488 00:16:27.838 } 00:16:27.838 ] 00:16:27.838 }' 00:16:27.838 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.838 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.404 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:28.404 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:28.404 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.690 [2024-07-25 11:27:44.466444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.690 [2024-07-25 11:27:44.466532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.690 [2024-07-25 11:27:44.466564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:28.690 [2024-07-25 11:27:44.466583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.690 [2024-07-25 11:27:44.467152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.690 [2024-07-25 11:27:44.467194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.690 [2024-07-25 11:27:44.467306] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.690 [2024-07-25 11:27:44.467348] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.690 pt2 00:16:28.690 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:28.690 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:28.690 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.948 [2024-07-25 11:27:44.701877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.948 [2024-07-25 11:27:44.701969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.948 [2024-07-25 11:27:44.702009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:28.948 [2024-07-25 11:27:44.702033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.948 [2024-07-25 11:27:44.702597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.948 [2024-07-25 11:27:44.702651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.948 [2024-07-25 11:27:44.702761] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:28.948 [2024-07-25 11:27:44.702801] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.948 pt3 00:16:28.948 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:28.948 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:28.948 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
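What the preceding trace exercises is superblock-based re-assembly: the harness tears down raid_bdev1 and its passthru base bdevs, then re-creates the pt bdevs one at a time; each bdev_passthru_create triggers SPDK's examine path, the RAID superblock written earlier is found on the new pt bdev (bdev_raid.c:3875), and the still-configuring raid_bdev1 claims it. The pt4 creation issued just above completes the set, after which raid_bdev_configure_cont registers the io device and the verify_raid_bdev_state call at bdev_raid.sh@498 expects the array to report "online". A minimal sketch of the equivalent RPC sequence, reusing the socket path and bdev names from this trace (not a verbatim extract of the harness; rpc.py is scripts/rpc.py in the SPDK tree):

  # Re-create one passthru base bdev over its malloc backing device; examine
  # finds the on-disk RAID superblock and raid_bdev1 claims the bdev automatically.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 \
      -u 00000000-0000-0000-0000-000000000002
  # Repeat for pt3 and pt4; after the fourth claim the array leaves "configuring".
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all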
00:16:29.206 [2024-07-25 11:27:44.941917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:29.206 [2024-07-25 11:27:44.942009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.206 [2024-07-25 11:27:44.942040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:29.206 [2024-07-25 11:27:44.942060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.206 [2024-07-25 11:27:44.942616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.206 [2024-07-25 11:27:44.942672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:29.206 [2024-07-25 11:27:44.942775] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:29.206 [2024-07-25 11:27:44.942825] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:29.206 [2024-07-25 11:27:44.943024] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:29.206 [2024-07-25 11:27:44.943046] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:29.206 [2024-07-25 11:27:44.943348] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:29.206 [2024-07-25 11:27:44.943567] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:29.206 [2024-07-25 11:27:44.943584] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:29.206 [2024-07-25 11:27:44.943762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.206 pt4 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.206 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.463 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.463 "name": "raid_bdev1", 00:16:29.463 "uuid": 
"51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:29.463 "strip_size_kb": 64, 00:16:29.463 "state": "online", 00:16:29.463 "raid_level": "raid0", 00:16:29.463 "superblock": true, 00:16:29.463 "num_base_bdevs": 4, 00:16:29.463 "num_base_bdevs_discovered": 4, 00:16:29.463 "num_base_bdevs_operational": 4, 00:16:29.463 "base_bdevs_list": [ 00:16:29.463 { 00:16:29.463 "name": "pt1", 00:16:29.463 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.463 "is_configured": true, 00:16:29.463 "data_offset": 2048, 00:16:29.463 "data_size": 63488 00:16:29.463 }, 00:16:29.463 { 00:16:29.463 "name": "pt2", 00:16:29.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.463 "is_configured": true, 00:16:29.463 "data_offset": 2048, 00:16:29.463 "data_size": 63488 00:16:29.463 }, 00:16:29.463 { 00:16:29.463 "name": "pt3", 00:16:29.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.463 "is_configured": true, 00:16:29.463 "data_offset": 2048, 00:16:29.463 "data_size": 63488 00:16:29.463 }, 00:16:29.463 { 00:16:29.463 "name": "pt4", 00:16:29.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.463 "is_configured": true, 00:16:29.463 "data_offset": 2048, 00:16:29.463 "data_size": 63488 00:16:29.463 } 00:16:29.463 ] 00:16:29.463 }' 00:16:29.463 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.463 11:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:30.029 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:30.287 [2024-07-25 11:27:46.118667] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.287 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:30.287 "name": "raid_bdev1", 00:16:30.287 "aliases": [ 00:16:30.287 "51ed5590-e721-4041-bf8d-4c5d1ed54304" 00:16:30.287 ], 00:16:30.287 "product_name": "Raid Volume", 00:16:30.287 "block_size": 512, 00:16:30.287 "num_blocks": 253952, 00:16:30.287 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:30.287 "assigned_rate_limits": { 00:16:30.287 "rw_ios_per_sec": 0, 00:16:30.287 "rw_mbytes_per_sec": 0, 00:16:30.287 "r_mbytes_per_sec": 0, 00:16:30.287 "w_mbytes_per_sec": 0 00:16:30.287 }, 00:16:30.287 "claimed": false, 00:16:30.287 "zoned": false, 00:16:30.287 "supported_io_types": { 00:16:30.287 "read": true, 00:16:30.287 "write": true, 00:16:30.287 "unmap": true, 00:16:30.287 "flush": true, 00:16:30.287 "reset": true, 00:16:30.287 "nvme_admin": false, 00:16:30.287 "nvme_io": false, 00:16:30.287 "nvme_io_md": false, 00:16:30.287 "write_zeroes": true, 00:16:30.287 "zcopy": false, 00:16:30.287 "get_zone_info": false, 00:16:30.287 "zone_management": 
false, 00:16:30.287 "zone_append": false, 00:16:30.287 "compare": false, 00:16:30.287 "compare_and_write": false, 00:16:30.287 "abort": false, 00:16:30.287 "seek_hole": false, 00:16:30.287 "seek_data": false, 00:16:30.287 "copy": false, 00:16:30.287 "nvme_iov_md": false 00:16:30.287 }, 00:16:30.287 "memory_domains": [ 00:16:30.287 { 00:16:30.287 "dma_device_id": "system", 00:16:30.287 "dma_device_type": 1 00:16:30.287 }, 00:16:30.287 { 00:16:30.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.287 "dma_device_type": 2 00:16:30.287 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "system", 00:16:30.288 "dma_device_type": 1 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.288 "dma_device_type": 2 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "system", 00:16:30.288 "dma_device_type": 1 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.288 "dma_device_type": 2 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "system", 00:16:30.288 "dma_device_type": 1 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.288 "dma_device_type": 2 00:16:30.288 } 00:16:30.288 ], 00:16:30.288 "driver_specific": { 00:16:30.288 "raid": { 00:16:30.288 "uuid": "51ed5590-e721-4041-bf8d-4c5d1ed54304", 00:16:30.288 "strip_size_kb": 64, 00:16:30.288 "state": "online", 00:16:30.288 "raid_level": "raid0", 00:16:30.288 "superblock": true, 00:16:30.288 "num_base_bdevs": 4, 00:16:30.288 "num_base_bdevs_discovered": 4, 00:16:30.288 "num_base_bdevs_operational": 4, 00:16:30.288 "base_bdevs_list": [ 00:16:30.288 { 00:16:30.288 "name": "pt1", 00:16:30.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "pt2", 00:16:30.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "pt3", 00:16:30.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 }, 00:16:30.288 { 00:16:30.288 "name": "pt4", 00:16:30.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.288 "is_configured": true, 00:16:30.288 "data_offset": 2048, 00:16:30.288 "data_size": 63488 00:16:30.288 } 00:16:30.288 ] 00:16:30.288 } 00:16:30.288 } 00:16:30.288 }' 00:16:30.288 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.555 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:30.555 pt2 00:16:30.555 pt3 00:16:30.555 pt4' 00:16:30.555 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:30.555 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:30.555 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:30.813 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:30.813 "name": "pt1", 00:16:30.813 "aliases": [ 00:16:30.813 "00000000-0000-0000-0000-000000000001" 00:16:30.813 ], 00:16:30.813 "product_name": 
"passthru", 00:16:30.813 "block_size": 512, 00:16:30.813 "num_blocks": 65536, 00:16:30.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.813 "assigned_rate_limits": { 00:16:30.813 "rw_ios_per_sec": 0, 00:16:30.813 "rw_mbytes_per_sec": 0, 00:16:30.813 "r_mbytes_per_sec": 0, 00:16:30.813 "w_mbytes_per_sec": 0 00:16:30.813 }, 00:16:30.813 "claimed": true, 00:16:30.813 "claim_type": "exclusive_write", 00:16:30.813 "zoned": false, 00:16:30.813 "supported_io_types": { 00:16:30.813 "read": true, 00:16:30.813 "write": true, 00:16:30.813 "unmap": true, 00:16:30.813 "flush": true, 00:16:30.813 "reset": true, 00:16:30.813 "nvme_admin": false, 00:16:30.813 "nvme_io": false, 00:16:30.813 "nvme_io_md": false, 00:16:30.813 "write_zeroes": true, 00:16:30.813 "zcopy": true, 00:16:30.813 "get_zone_info": false, 00:16:30.813 "zone_management": false, 00:16:30.813 "zone_append": false, 00:16:30.813 "compare": false, 00:16:30.813 "compare_and_write": false, 00:16:30.813 "abort": true, 00:16:30.813 "seek_hole": false, 00:16:30.813 "seek_data": false, 00:16:30.813 "copy": true, 00:16:30.813 "nvme_iov_md": false 00:16:30.813 }, 00:16:30.813 "memory_domains": [ 00:16:30.813 { 00:16:30.813 "dma_device_id": "system", 00:16:30.813 "dma_device_type": 1 00:16:30.813 }, 00:16:30.813 { 00:16:30.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.813 "dma_device_type": 2 00:16:30.813 } 00:16:30.813 ], 00:16:30.814 "driver_specific": { 00:16:30.814 "passthru": { 00:16:30.814 "name": "pt1", 00:16:30.814 "base_bdev_name": "malloc1" 00:16:30.814 } 00:16:30.814 } 00:16:30.814 }' 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:30.814 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.072 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:31.330 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.330 "name": "pt2", 00:16:31.330 "aliases": [ 00:16:31.330 "00000000-0000-0000-0000-000000000002" 00:16:31.330 ], 00:16:31.330 "product_name": "passthru", 00:16:31.330 "block_size": 512, 00:16:31.330 "num_blocks": 65536, 00:16:31.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.330 
"assigned_rate_limits": { 00:16:31.330 "rw_ios_per_sec": 0, 00:16:31.330 "rw_mbytes_per_sec": 0, 00:16:31.330 "r_mbytes_per_sec": 0, 00:16:31.330 "w_mbytes_per_sec": 0 00:16:31.330 }, 00:16:31.330 "claimed": true, 00:16:31.330 "claim_type": "exclusive_write", 00:16:31.330 "zoned": false, 00:16:31.330 "supported_io_types": { 00:16:31.330 "read": true, 00:16:31.330 "write": true, 00:16:31.330 "unmap": true, 00:16:31.330 "flush": true, 00:16:31.330 "reset": true, 00:16:31.330 "nvme_admin": false, 00:16:31.330 "nvme_io": false, 00:16:31.330 "nvme_io_md": false, 00:16:31.330 "write_zeroes": true, 00:16:31.330 "zcopy": true, 00:16:31.330 "get_zone_info": false, 00:16:31.330 "zone_management": false, 00:16:31.330 "zone_append": false, 00:16:31.330 "compare": false, 00:16:31.330 "compare_and_write": false, 00:16:31.330 "abort": true, 00:16:31.330 "seek_hole": false, 00:16:31.330 "seek_data": false, 00:16:31.330 "copy": true, 00:16:31.330 "nvme_iov_md": false 00:16:31.330 }, 00:16:31.330 "memory_domains": [ 00:16:31.330 { 00:16:31.330 "dma_device_id": "system", 00:16:31.330 "dma_device_type": 1 00:16:31.330 }, 00:16:31.330 { 00:16:31.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.330 "dma_device_type": 2 00:16:31.330 } 00:16:31.330 ], 00:16:31.330 "driver_specific": { 00:16:31.330 "passthru": { 00:16:31.330 "name": "pt2", 00:16:31.330 "base_bdev_name": "malloc2" 00:16:31.330 } 00:16:31.330 } 00:16:31.330 }' 00:16:31.330 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.330 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.588 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.846 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.846 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.846 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.846 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:31.846 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.104 "name": "pt3", 00:16:32.104 "aliases": [ 00:16:32.104 "00000000-0000-0000-0000-000000000003" 00:16:32.104 ], 00:16:32.104 "product_name": "passthru", 00:16:32.104 "block_size": 512, 00:16:32.104 "num_blocks": 65536, 00:16:32.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.104 "assigned_rate_limits": { 00:16:32.104 "rw_ios_per_sec": 0, 00:16:32.104 "rw_mbytes_per_sec": 0, 00:16:32.104 "r_mbytes_per_sec": 0, 00:16:32.104 
"w_mbytes_per_sec": 0 00:16:32.104 }, 00:16:32.104 "claimed": true, 00:16:32.104 "claim_type": "exclusive_write", 00:16:32.104 "zoned": false, 00:16:32.104 "supported_io_types": { 00:16:32.104 "read": true, 00:16:32.104 "write": true, 00:16:32.104 "unmap": true, 00:16:32.104 "flush": true, 00:16:32.104 "reset": true, 00:16:32.104 "nvme_admin": false, 00:16:32.104 "nvme_io": false, 00:16:32.104 "nvme_io_md": false, 00:16:32.104 "write_zeroes": true, 00:16:32.104 "zcopy": true, 00:16:32.104 "get_zone_info": false, 00:16:32.104 "zone_management": false, 00:16:32.104 "zone_append": false, 00:16:32.104 "compare": false, 00:16:32.104 "compare_and_write": false, 00:16:32.104 "abort": true, 00:16:32.104 "seek_hole": false, 00:16:32.104 "seek_data": false, 00:16:32.104 "copy": true, 00:16:32.104 "nvme_iov_md": false 00:16:32.104 }, 00:16:32.104 "memory_domains": [ 00:16:32.104 { 00:16:32.104 "dma_device_id": "system", 00:16:32.104 "dma_device_type": 1 00:16:32.104 }, 00:16:32.104 { 00:16:32.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.104 "dma_device_type": 2 00:16:32.104 } 00:16:32.104 ], 00:16:32.104 "driver_specific": { 00:16:32.104 "passthru": { 00:16:32.104 "name": "pt3", 00:16:32.104 "base_bdev_name": "malloc3" 00:16:32.104 } 00:16:32.104 } 00:16:32.104 }' 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.104 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.362 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:32.620 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.620 "name": "pt4", 00:16:32.620 "aliases": [ 00:16:32.620 "00000000-0000-0000-0000-000000000004" 00:16:32.620 ], 00:16:32.620 "product_name": "passthru", 00:16:32.620 "block_size": 512, 00:16:32.620 "num_blocks": 65536, 00:16:32.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.620 "assigned_rate_limits": { 00:16:32.620 "rw_ios_per_sec": 0, 00:16:32.620 "rw_mbytes_per_sec": 0, 00:16:32.620 "r_mbytes_per_sec": 0, 00:16:32.620 "w_mbytes_per_sec": 0 00:16:32.620 }, 00:16:32.620 "claimed": true, 00:16:32.620 "claim_type": "exclusive_write", 00:16:32.620 "zoned": false, 
00:16:32.620 "supported_io_types": { 00:16:32.621 "read": true, 00:16:32.621 "write": true, 00:16:32.621 "unmap": true, 00:16:32.621 "flush": true, 00:16:32.621 "reset": true, 00:16:32.621 "nvme_admin": false, 00:16:32.621 "nvme_io": false, 00:16:32.621 "nvme_io_md": false, 00:16:32.621 "write_zeroes": true, 00:16:32.621 "zcopy": true, 00:16:32.621 "get_zone_info": false, 00:16:32.621 "zone_management": false, 00:16:32.621 "zone_append": false, 00:16:32.621 "compare": false, 00:16:32.621 "compare_and_write": false, 00:16:32.621 "abort": true, 00:16:32.621 "seek_hole": false, 00:16:32.621 "seek_data": false, 00:16:32.621 "copy": true, 00:16:32.621 "nvme_iov_md": false 00:16:32.621 }, 00:16:32.621 "memory_domains": [ 00:16:32.621 { 00:16:32.621 "dma_device_id": "system", 00:16:32.621 "dma_device_type": 1 00:16:32.621 }, 00:16:32.621 { 00:16:32.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.621 "dma_device_type": 2 00:16:32.621 } 00:16:32.621 ], 00:16:32.621 "driver_specific": { 00:16:32.621 "passthru": { 00:16:32.621 "name": "pt4", 00:16:32.621 "base_bdev_name": "malloc4" 00:16:32.621 } 00:16:32.621 } 00:16:32.621 }' 00:16:32.621 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.621 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.879 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.137 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.137 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.137 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:33.137 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:33.396 [2024-07-25 11:27:49.127408] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 51ed5590-e721-4041-bf8d-4c5d1ed54304 '!=' 51ed5590-e721-4041-bf8d-4c5d1ed54304 ']' 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 78198 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78198 ']' 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78198 00:16:33.396 11:27:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78198 00:16:33.396 killing process with pid 78198 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78198' 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78198 00:16:33.396 [2024-07-25 11:27:49.178901] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.396 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78198 00:16:33.396 [2024-07-25 11:27:49.179030] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.396 [2024-07-25 11:27:49.179128] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.396 [2024-07-25 11:27:49.179152] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:33.655 [2024-07-25 11:27:49.536235] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.031 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:35.031 00:16:35.031 real 0m19.369s 00:16:35.031 user 0m34.549s 00:16:35.031 sys 0m2.447s 00:16:35.031 11:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.031 11:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.031 ************************************ 00:16:35.031 END TEST raid_superblock_test 00:16:35.031 ************************************ 00:16:35.031 11:27:50 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:16:35.031 11:27:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:35.031 11:27:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.031 11:27:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.031 ************************************ 00:16:35.031 START TEST raid_read_error_test 00:16:35.031 ************************************ 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.cSSmAj1aJg 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=78743 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 78743 /var/tmp/spdk-raid.sock 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78743 ']' 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
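The read-error test starting here runs bdevperf for 60 seconds of randrw I/O (128k, queue depth 1, 50% reads) against a raid0 volume whose base bdevs are each stacked as malloc -> error bdev -> passthru; the EE_* error layer is what lets read failures be injected beneath the RAID volume without touching the backing malloc bdevs. A minimal sketch of one base-bdev stack as the trace that follows builds it (names, sizes, and flags copied from this run; BaseBdev2 through BaseBdev4 are built the same way):

  # 32 MB malloc bdev with 512-byte blocks, wrapped in an error-injection bdev
  # (exposed as EE_BaseBdev1_malloc), then in a passthru bdev the RAID will claim.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # Once all four stacks exist, the harness assembles the array with a superblock:
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s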
00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.031 11:27:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.031 [2024-07-25 11:27:50.888472] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:16:35.031 [2024-07-25 11:27:50.888959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78743 ] 00:16:35.289 [2024-07-25 11:27:51.063145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.548 [2024-07-25 11:27:51.373158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.832 [2024-07-25 11:27:51.578755] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.832 [2024-07-25 11:27:51.578809] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.090 11:27:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.090 11:27:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:36.090 11:27:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:36.090 11:27:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:36.348 BaseBdev1_malloc 00:16:36.349 11:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:36.606 true 00:16:36.606 11:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:36.863 [2024-07-25 11:27:52.706813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:36.863 [2024-07-25 11:27:52.706893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.863 [2024-07-25 11:27:52.706929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:36.863 [2024-07-25 11:27:52.706944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.863 [2024-07-25 11:27:52.709739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.863 [2024-07-25 11:27:52.709774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.863 BaseBdev1 00:16:36.863 11:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:36.863 11:27:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:37.121 BaseBdev2_malloc 00:16:37.379 11:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:37.638 true 00:16:37.638 11:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:16:37.638 [2024-07-25 11:27:53.486019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:37.638 [2024-07-25 11:27:53.486305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.638 [2024-07-25 11:27:53.486391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:37.638 [2024-07-25 11:27:53.486521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.638 [2024-07-25 11:27:53.489335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.638 [2024-07-25 11:27:53.489506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:37.638 BaseBdev2 00:16:37.638 11:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:37.638 11:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:37.896 BaseBdev3_malloc 00:16:37.896 11:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:38.154 true 00:16:38.154 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:38.411 [2024-07-25 11:27:54.241733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:38.411 [2024-07-25 11:27:54.242025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.411 [2024-07-25 11:27:54.242108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:38.411 [2024-07-25 11:27:54.242361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.411 [2024-07-25 11:27:54.245216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.411 [2024-07-25 11:27:54.245387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:38.411 BaseBdev3 00:16:38.411 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:38.411 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:38.977 BaseBdev4_malloc 00:16:38.977 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:38.977 true 00:16:38.977 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:39.241 [2024-07-25 11:27:55.049411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:39.241 [2024-07-25 11:27:55.049493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.241 [2024-07-25 11:27:55.049542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:39.241 [2024-07-25 11:27:55.049559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:16:39.241 [2024-07-25 11:27:55.052333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.241 [2024-07-25 11:27:55.052377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:39.241 BaseBdev4 00:16:39.241 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:39.500 [2024-07-25 11:27:55.277545] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.500 [2024-07-25 11:27:55.280086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.500 [2024-07-25 11:27:55.280335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.500 [2024-07-25 11:27:55.280437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:39.500 [2024-07-25 11:27:55.280785] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:39.500 [2024-07-25 11:27:55.280805] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:39.500 [2024-07-25 11:27:55.281171] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:39.500 [2024-07-25 11:27:55.281382] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:39.500 [2024-07-25 11:27:55.281403] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:39.500 [2024-07-25 11:27:55.281675] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.500 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.759 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.759 "name": "raid_bdev1", 00:16:39.759 "uuid": "96119b6f-75d2-4a64-93d1-23ebd28211de", 00:16:39.759 "strip_size_kb": 64, 00:16:39.759 "state": "online", 00:16:39.759 "raid_level": "raid0", 00:16:39.759 "superblock": true, 
00:16:39.759 "num_base_bdevs": 4, 00:16:39.759 "num_base_bdevs_discovered": 4, 00:16:39.759 "num_base_bdevs_operational": 4, 00:16:39.759 "base_bdevs_list": [ 00:16:39.759 { 00:16:39.759 "name": "BaseBdev1", 00:16:39.759 "uuid": "2022722e-f623-511c-934b-97efa7876a80", 00:16:39.759 "is_configured": true, 00:16:39.759 "data_offset": 2048, 00:16:39.759 "data_size": 63488 00:16:39.759 }, 00:16:39.759 { 00:16:39.759 "name": "BaseBdev2", 00:16:39.759 "uuid": "42c154a4-2715-5cd0-b4c7-8100a60795ce", 00:16:39.759 "is_configured": true, 00:16:39.759 "data_offset": 2048, 00:16:39.759 "data_size": 63488 00:16:39.759 }, 00:16:39.759 { 00:16:39.759 "name": "BaseBdev3", 00:16:39.759 "uuid": "bf5fd570-9616-5eda-8aee-ade34862b339", 00:16:39.759 "is_configured": true, 00:16:39.759 "data_offset": 2048, 00:16:39.759 "data_size": 63488 00:16:39.759 }, 00:16:39.759 { 00:16:39.759 "name": "BaseBdev4", 00:16:39.759 "uuid": "39724cfe-f7de-5b81-92af-779a321909fa", 00:16:39.759 "is_configured": true, 00:16:39.759 "data_offset": 2048, 00:16:39.759 "data_size": 63488 00:16:39.759 } 00:16:39.759 ] 00:16:39.759 }' 00:16:39.759 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.759 11:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:40.692 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:40.692 [2024-07-25 11:27:56.387276] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:41.622 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.878 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.878 11:27:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.176 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.176 "name": "raid_bdev1", 00:16:42.176 "uuid": "96119b6f-75d2-4a64-93d1-23ebd28211de", 00:16:42.176 "strip_size_kb": 64, 00:16:42.176 "state": "online", 00:16:42.176 "raid_level": "raid0", 00:16:42.176 "superblock": true, 00:16:42.176 "num_base_bdevs": 4, 00:16:42.176 "num_base_bdevs_discovered": 4, 00:16:42.176 "num_base_bdevs_operational": 4, 00:16:42.176 "base_bdevs_list": [ 00:16:42.176 { 00:16:42.176 "name": "BaseBdev1", 00:16:42.176 "uuid": "2022722e-f623-511c-934b-97efa7876a80", 00:16:42.176 "is_configured": true, 00:16:42.176 "data_offset": 2048, 00:16:42.176 "data_size": 63488 00:16:42.176 }, 00:16:42.176 { 00:16:42.176 "name": "BaseBdev2", 00:16:42.176 "uuid": "42c154a4-2715-5cd0-b4c7-8100a60795ce", 00:16:42.176 "is_configured": true, 00:16:42.176 "data_offset": 2048, 00:16:42.176 "data_size": 63488 00:16:42.176 }, 00:16:42.176 { 00:16:42.176 "name": "BaseBdev3", 00:16:42.176 "uuid": "bf5fd570-9616-5eda-8aee-ade34862b339", 00:16:42.176 "is_configured": true, 00:16:42.176 "data_offset": 2048, 00:16:42.176 "data_size": 63488 00:16:42.176 }, 00:16:42.176 { 00:16:42.176 "name": "BaseBdev4", 00:16:42.176 "uuid": "39724cfe-f7de-5b81-92af-779a321909fa", 00:16:42.176 "is_configured": true, 00:16:42.176 "data_offset": 2048, 00:16:42.176 "data_size": 63488 00:16:42.176 } 00:16:42.176 ] 00:16:42.176 }' 00:16:42.176 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.176 11:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.741 11:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:42.999 [2024-07-25 11:27:58.728539] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.999 [2024-07-25 11:27:58.728768] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.999 [2024-07-25 11:27:58.732041] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.999 [2024-07-25 11:27:58.732224] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.999 [2024-07-25 11:27:58.732404] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.999 [2024-07-25 11:27:58.732589] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:42.999 0 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 78743 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78743 ']' 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78743 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78743 00:16:42.999 killing process with pid 78743 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.999 11:27:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78743' 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78743 00:16:42.999 [2024-07-25 11:27:58.777724] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.999 11:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78743 00:16:43.259 [2024-07-25 11:27:59.063567] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.cSSmAj1aJg 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.43 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.43 != \0\.\0\0 ]] 00:16:44.631 00:16:44.631 real 0m9.526s 00:16:44.631 user 0m14.720s 00:16:44.631 sys 0m1.169s 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.631 ************************************ 00:16:44.631 END TEST raid_read_error_test 00:16:44.631 ************************************ 00:16:44.631 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.631 11:28:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:44.631 11:28:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:44.631 11:28:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.631 11:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.631 ************************************ 00:16:44.631 START TEST raid_write_error_test 00:16:44.631 ************************************ 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
(( i++ )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.BNJFu19F5G 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=78955 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 78955 /var/tmp/spdk-raid.sock 00:16:44.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78955 ']' 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.631 11:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.631 [2024-07-25 11:28:00.442252] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
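
For reference, the RPC sequence that raid_write_error_test drives against this bdevperf instance (mirroring the read test above) can be reproduced by hand. A minimal sketch, assuming the same socket path, repo layout, and bdev names printed in this log:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# One stack per base bdev: malloc -> error bdev -> passthru, so EE_BaseBdevN_malloc can inject failures.
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_error_create BaseBdev1_malloc
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# ...repeated for BaseBdev2, BaseBdev3 and BaseBdev4...
# Assemble the striped raid0 volume, inject write failures into the first leg, then run the queued bdevperf job.
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
$RPC bdev_raid_delete raid_bdev1
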
00:16:44.631 [2024-07-25 11:28:00.442605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78955 ] 00:16:44.889 [2024-07-25 11:28:00.609542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.148 [2024-07-25 11:28:00.842369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.406 [2024-07-25 11:28:01.044256] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.406 [2024-07-25 11:28:01.044499] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.664 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.664 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:45.664 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:45.664 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.921 BaseBdev1_malloc 00:16:45.921 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:46.183 true 00:16:46.183 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.440 [2024-07-25 11:28:02.125413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.440 [2024-07-25 11:28:02.125544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.440 [2024-07-25 11:28:02.125605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:46.440 [2024-07-25 11:28:02.125667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.440 [2024-07-25 11:28:02.128516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.440 [2024-07-25 11:28:02.128576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.440 BaseBdev1 00:16:46.440 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:46.440 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:46.699 BaseBdev2_malloc 00:16:46.699 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:46.956 true 00:16:46.956 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:47.213 [2024-07-25 11:28:02.862254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:47.213 [2024-07-25 11:28:02.862342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.213 [2024-07-25 11:28:02.862382] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:47.213 [2024-07-25 11:28:02.862397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.213 [2024-07-25 11:28:02.865200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.213 [2024-07-25 11:28:02.865246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.213 BaseBdev2 00:16:47.213 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:47.213 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:47.471 BaseBdev3_malloc 00:16:47.471 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:47.729 true 00:16:47.729 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:47.985 [2024-07-25 11:28:03.642213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:47.985 [2024-07-25 11:28:03.642294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.985 [2024-07-25 11:28:03.642333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:47.985 [2024-07-25 11:28:03.642349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.985 [2024-07-25 11:28:03.645178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.985 [2024-07-25 11:28:03.645223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:47.985 BaseBdev3 00:16:47.985 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:47.985 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:48.243 BaseBdev4_malloc 00:16:48.243 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:48.501 true 00:16:48.501 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:48.759 [2024-07-25 11:28:04.413950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:48.759 [2024-07-25 11:28:04.414032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.759 [2024-07-25 11:28:04.414069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:48.759 [2024-07-25 11:28:04.414084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.759 [2024-07-25 11:28:04.416865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.759 [2024-07-25 11:28:04.416911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:48.759 BaseBdev4 00:16:48.759 
11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:49.017 [2024-07-25 11:28:04.642087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.017 [2024-07-25 11:28:04.644438] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.017 [2024-07-25 11:28:04.644570] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.017 [2024-07-25 11:28:04.644689] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:49.017 [2024-07-25 11:28:04.645004] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:49.017 [2024-07-25 11:28:04.645029] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:49.017 [2024-07-25 11:28:04.645413] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.017 [2024-07-25 11:28:04.645660] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:49.017 [2024-07-25 11:28:04.645682] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:49.017 [2024-07-25 11:28:04.645934] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.017 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.018 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.276 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.276 "name": "raid_bdev1", 00:16:49.276 "uuid": "0e389589-6e65-4784-a600-0a5104c0234a", 00:16:49.276 "strip_size_kb": 64, 00:16:49.276 "state": "online", 00:16:49.276 "raid_level": "raid0", 00:16:49.276 "superblock": true, 00:16:49.276 "num_base_bdevs": 4, 00:16:49.276 "num_base_bdevs_discovered": 4, 00:16:49.276 "num_base_bdevs_operational": 4, 00:16:49.276 "base_bdevs_list": [ 00:16:49.276 { 00:16:49.276 "name": "BaseBdev1", 00:16:49.276 "uuid": "239c2786-8fee-5d4b-8d3f-c3afc9dcd742", 00:16:49.276 
"is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": "BaseBdev2", 00:16:49.276 "uuid": "8e8c1897-4cd5-5b55-8a36-5eb460423691", 00:16:49.276 "is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": "BaseBdev3", 00:16:49.276 "uuid": "fc6e605a-c88c-5139-b555-88d32ca7f645", 00:16:49.276 "is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 }, 00:16:49.276 { 00:16:49.276 "name": "BaseBdev4", 00:16:49.276 "uuid": "408659aa-9add-5baa-9dbf-229bb397c438", 00:16:49.276 "is_configured": true, 00:16:49.276 "data_offset": 2048, 00:16:49.276 "data_size": 63488 00:16:49.276 } 00:16:49.276 ] 00:16:49.276 }' 00:16:49.276 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.276 11:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.842 11:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:49.842 11:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:49.842 [2024-07-25 11:28:05.659773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:50.793 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.051 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.052 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.310 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.310 "name": "raid_bdev1", 00:16:51.310 "uuid": 
"0e389589-6e65-4784-a600-0a5104c0234a", 00:16:51.310 "strip_size_kb": 64, 00:16:51.310 "state": "online", 00:16:51.310 "raid_level": "raid0", 00:16:51.310 "superblock": true, 00:16:51.310 "num_base_bdevs": 4, 00:16:51.310 "num_base_bdevs_discovered": 4, 00:16:51.310 "num_base_bdevs_operational": 4, 00:16:51.310 "base_bdevs_list": [ 00:16:51.310 { 00:16:51.310 "name": "BaseBdev1", 00:16:51.310 "uuid": "239c2786-8fee-5d4b-8d3f-c3afc9dcd742", 00:16:51.310 "is_configured": true, 00:16:51.310 "data_offset": 2048, 00:16:51.310 "data_size": 63488 00:16:51.310 }, 00:16:51.310 { 00:16:51.310 "name": "BaseBdev2", 00:16:51.310 "uuid": "8e8c1897-4cd5-5b55-8a36-5eb460423691", 00:16:51.310 "is_configured": true, 00:16:51.310 "data_offset": 2048, 00:16:51.310 "data_size": 63488 00:16:51.310 }, 00:16:51.310 { 00:16:51.310 "name": "BaseBdev3", 00:16:51.310 "uuid": "fc6e605a-c88c-5139-b555-88d32ca7f645", 00:16:51.310 "is_configured": true, 00:16:51.310 "data_offset": 2048, 00:16:51.310 "data_size": 63488 00:16:51.310 }, 00:16:51.310 { 00:16:51.310 "name": "BaseBdev4", 00:16:51.310 "uuid": "408659aa-9add-5baa-9dbf-229bb397c438", 00:16:51.310 "is_configured": true, 00:16:51.310 "data_offset": 2048, 00:16:51.310 "data_size": 63488 00:16:51.310 } 00:16:51.310 ] 00:16:51.310 }' 00:16:51.310 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.310 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.877 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:52.135 [2024-07-25 11:28:07.939776] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.135 [2024-07-25 11:28:07.940023] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.135 [2024-07-25 11:28:07.943196] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.135 [2024-07-25 11:28:07.943266] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.135 [2024-07-25 11:28:07.943333] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.135 [2024-07-25 11:28:07.943347] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:52.135 0 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 78955 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78955 ']' 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78955 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78955 00:16:52.135 killing process with pid 78955 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78955' 00:16:52.135 11:28:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78955 00:16:52.135 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78955 00:16:52.135 [2024-07-25 11:28:07.986834] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.393 [2024-07-25 11:28:08.274349] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.BNJFu19F5G 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:53.767 ************************************ 00:16:53.767 END TEST raid_write_error_test 00:16:53.767 ************************************ 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.44 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.44 != \0\.\0\0 ]] 00:16:53.767 00:16:53.767 real 0m9.160s 00:16:53.767 user 0m13.999s 00:16:53.767 sys 0m1.166s 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:53.767 11:28:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.767 11:28:09 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:53.767 11:28:09 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:53.767 11:28:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:53.767 11:28:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:53.767 11:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.767 ************************************ 00:16:53.767 START TEST raid_state_function_test 00:16:53.767 ************************************ 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:53.767 Process raid pid: 79160 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=79160 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 79160' 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 79160 /var/tmp/spdk-raid.sock 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79160 ']' 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
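
As an aside on the two error-injection tests that completed above: the pass/fail criterion is taken from bdevperf's per-job output. A sketch of that check, using the temporary log file name from this write-error run:

fail_per_s=$(grep -v Job /raidtest/tmp.BNJFu19F5G | grep raid_bdev1 | awk '{print $6}')
# raid0 carries no redundancy, so the injected errors must surface as a non-zero failure rate.
[[ "$fail_per_s" != "0.00" ]]
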
00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.767 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.031 [2024-07-25 11:28:09.656520] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:16:54.031 [2024-07-25 11:28:09.656758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.031 [2024-07-25 11:28:09.835404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.328 [2024-07-25 11:28:10.115525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.586 [2024-07-25 11:28:10.332697] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.586 [2024-07-25 11:28:10.332740] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.844 11:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.844 11:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:54.844 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:55.102 [2024-07-25 11:28:10.815271] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.102 [2024-07-25 11:28:10.815543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.102 [2024-07-25 11:28:10.815728] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.102 [2024-07-25 11:28:10.815896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.102 [2024-07-25 11:28:10.816041] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.102 [2024-07-25 11:28:10.816073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.102 [2024-07-25 11:28:10.816088] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:55.102 [2024-07-25 11:28:10.816101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.102 11:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.102 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.359 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.359 "name": "Existed_Raid", 00:16:55.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.359 "strip_size_kb": 64, 00:16:55.359 "state": "configuring", 00:16:55.359 "raid_level": "concat", 00:16:55.359 "superblock": false, 00:16:55.359 "num_base_bdevs": 4, 00:16:55.359 "num_base_bdevs_discovered": 0, 00:16:55.359 "num_base_bdevs_operational": 4, 00:16:55.359 "base_bdevs_list": [ 00:16:55.359 { 00:16:55.359 "name": "BaseBdev1", 00:16:55.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.359 "is_configured": false, 00:16:55.359 "data_offset": 0, 00:16:55.359 "data_size": 0 00:16:55.359 }, 00:16:55.359 { 00:16:55.359 "name": "BaseBdev2", 00:16:55.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.359 "is_configured": false, 00:16:55.359 "data_offset": 0, 00:16:55.359 "data_size": 0 00:16:55.359 }, 00:16:55.359 { 00:16:55.360 "name": "BaseBdev3", 00:16:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.360 "is_configured": false, 00:16:55.360 "data_offset": 0, 00:16:55.360 "data_size": 0 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "name": "BaseBdev4", 00:16:55.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.360 "is_configured": false, 00:16:55.360 "data_offset": 0, 00:16:55.360 "data_size": 0 00:16:55.360 } 00:16:55.360 ] 00:16:55.360 }' 00:16:55.360 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.360 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.925 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.183 [2024-07-25 11:28:11.935436] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.183 [2024-07-25 11:28:11.935485] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:56.183 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:56.442 [2024-07-25 11:28:12.175496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.442 [2024-07-25 11:28:12.175560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.442 [2024-07-25 11:28:12.175580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.442 [2024-07-25 11:28:12.175595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.442 [2024-07-25 11:28:12.175607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.442 [2024-07-25 11:28:12.175641] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.442 [2024-07-25 11:28:12.175657] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:56.442 [2024-07-25 11:28:12.175671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:56.442 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.699 [2024-07-25 11:28:12.464509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.699 BaseBdev1 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:56.699 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.956 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.250 [ 00:16:57.250 { 00:16:57.250 "name": "BaseBdev1", 00:16:57.250 "aliases": [ 00:16:57.250 "567238bd-a115-40d7-ba19-938216b804e2" 00:16:57.250 ], 00:16:57.250 "product_name": "Malloc disk", 00:16:57.250 "block_size": 512, 00:16:57.250 "num_blocks": 65536, 00:16:57.250 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:16:57.250 "assigned_rate_limits": { 00:16:57.250 "rw_ios_per_sec": 0, 00:16:57.250 "rw_mbytes_per_sec": 0, 00:16:57.250 "r_mbytes_per_sec": 0, 00:16:57.250 "w_mbytes_per_sec": 0 00:16:57.250 }, 00:16:57.250 "claimed": true, 00:16:57.250 "claim_type": "exclusive_write", 00:16:57.250 "zoned": false, 00:16:57.250 "supported_io_types": { 00:16:57.250 "read": true, 00:16:57.250 "write": true, 00:16:57.250 "unmap": true, 00:16:57.250 "flush": true, 00:16:57.250 "reset": true, 00:16:57.250 "nvme_admin": false, 00:16:57.250 "nvme_io": false, 00:16:57.250 "nvme_io_md": false, 00:16:57.250 "write_zeroes": true, 00:16:57.250 "zcopy": true, 00:16:57.250 "get_zone_info": false, 00:16:57.250 "zone_management": false, 00:16:57.250 "zone_append": false, 00:16:57.250 "compare": false, 00:16:57.250 "compare_and_write": false, 00:16:57.250 "abort": true, 00:16:57.250 "seek_hole": false, 00:16:57.250 "seek_data": false, 00:16:57.250 "copy": true, 00:16:57.250 "nvme_iov_md": false 00:16:57.250 }, 00:16:57.250 "memory_domains": [ 00:16:57.250 { 00:16:57.250 "dma_device_id": "system", 00:16:57.250 "dma_device_type": 1 00:16:57.250 }, 00:16:57.250 { 00:16:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.250 "dma_device_type": 2 00:16:57.250 } 00:16:57.250 ], 00:16:57.250 "driver_specific": {} 00:16:57.250 } 00:16:57.250 ] 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:57.250 11:28:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.250 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.251 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.251 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.509 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.509 "name": "Existed_Raid", 00:16:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.509 "strip_size_kb": 64, 00:16:57.509 "state": "configuring", 00:16:57.509 "raid_level": "concat", 00:16:57.509 "superblock": false, 00:16:57.509 "num_base_bdevs": 4, 00:16:57.509 "num_base_bdevs_discovered": 1, 00:16:57.509 "num_base_bdevs_operational": 4, 00:16:57.509 "base_bdevs_list": [ 00:16:57.509 { 00:16:57.509 "name": "BaseBdev1", 00:16:57.509 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:16:57.509 "is_configured": true, 00:16:57.509 "data_offset": 0, 00:16:57.509 "data_size": 65536 00:16:57.509 }, 00:16:57.509 { 00:16:57.509 "name": "BaseBdev2", 00:16:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.509 "is_configured": false, 00:16:57.509 "data_offset": 0, 00:16:57.509 "data_size": 0 00:16:57.509 }, 00:16:57.509 { 00:16:57.509 "name": "BaseBdev3", 00:16:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.509 "is_configured": false, 00:16:57.509 "data_offset": 0, 00:16:57.509 "data_size": 0 00:16:57.509 }, 00:16:57.509 { 00:16:57.509 "name": "BaseBdev4", 00:16:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.509 "is_configured": false, 00:16:57.509 "data_offset": 0, 00:16:57.509 "data_size": 0 00:16:57.509 } 00:16:57.509 ] 00:16:57.509 }' 00:16:57.509 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.509 11:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.075 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.332 [2024-07-25 11:28:14.037099] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.332 [2024-07-25 11:28:14.037171] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
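
The step exercised here can be summarized as a "configuring" state check. A minimal sketch, assuming the bdev_svc instance started above is still listening on the raid socket:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Declare the concat array before any of its members exist: it must sit in the "configuring" state.
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Create only the first member; num_base_bdevs_discovered moves from 0 to 1 while the state stays "configuring".
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
$RPC bdev_raid_delete Existed_Raid
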
00:16:58.332 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:58.591 [2024-07-25 11:28:14.317234] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.591 [2024-07-25 11:28:14.319575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.591 [2024-07-25 11:28:14.319637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.591 [2024-07-25 11:28:14.319659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.591 [2024-07-25 11:28:14.319674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.591 [2024-07-25 11:28:14.319689] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:58.591 [2024-07-25 11:28:14.319702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.591 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.849 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.849 "name": "Existed_Raid", 00:16:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.849 "strip_size_kb": 64, 00:16:58.849 "state": "configuring", 00:16:58.849 "raid_level": "concat", 00:16:58.849 "superblock": false, 00:16:58.849 "num_base_bdevs": 4, 00:16:58.849 "num_base_bdevs_discovered": 1, 00:16:58.849 "num_base_bdevs_operational": 4, 00:16:58.849 "base_bdevs_list": [ 00:16:58.849 { 00:16:58.849 "name": "BaseBdev1", 00:16:58.849 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:16:58.849 "is_configured": true, 00:16:58.849 "data_offset": 0, 00:16:58.849 
"data_size": 65536 00:16:58.849 }, 00:16:58.849 { 00:16:58.849 "name": "BaseBdev2", 00:16:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.849 "is_configured": false, 00:16:58.849 "data_offset": 0, 00:16:58.849 "data_size": 0 00:16:58.849 }, 00:16:58.849 { 00:16:58.849 "name": "BaseBdev3", 00:16:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.849 "is_configured": false, 00:16:58.849 "data_offset": 0, 00:16:58.849 "data_size": 0 00:16:58.849 }, 00:16:58.849 { 00:16:58.849 "name": "BaseBdev4", 00:16:58.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.849 "is_configured": false, 00:16:58.849 "data_offset": 0, 00:16:58.849 "data_size": 0 00:16:58.849 } 00:16:58.849 ] 00:16:58.849 }' 00:16:58.849 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.849 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.417 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.674 [2024-07-25 11:28:15.507050] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.674 BaseBdev2 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.674 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.241 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.241 [ 00:17:00.241 { 00:17:00.241 "name": "BaseBdev2", 00:17:00.241 "aliases": [ 00:17:00.241 "1139d0be-aacf-49ac-9b4a-f418a6e98863" 00:17:00.241 ], 00:17:00.241 "product_name": "Malloc disk", 00:17:00.241 "block_size": 512, 00:17:00.241 "num_blocks": 65536, 00:17:00.241 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:00.241 "assigned_rate_limits": { 00:17:00.241 "rw_ios_per_sec": 0, 00:17:00.241 "rw_mbytes_per_sec": 0, 00:17:00.241 "r_mbytes_per_sec": 0, 00:17:00.241 "w_mbytes_per_sec": 0 00:17:00.241 }, 00:17:00.241 "claimed": true, 00:17:00.241 "claim_type": "exclusive_write", 00:17:00.241 "zoned": false, 00:17:00.241 "supported_io_types": { 00:17:00.241 "read": true, 00:17:00.241 "write": true, 00:17:00.241 "unmap": true, 00:17:00.241 "flush": true, 00:17:00.241 "reset": true, 00:17:00.241 "nvme_admin": false, 00:17:00.241 "nvme_io": false, 00:17:00.241 "nvme_io_md": false, 00:17:00.241 "write_zeroes": true, 00:17:00.241 "zcopy": true, 00:17:00.241 "get_zone_info": false, 00:17:00.241 "zone_management": false, 00:17:00.241 "zone_append": false, 00:17:00.241 "compare": false, 00:17:00.241 "compare_and_write": false, 00:17:00.241 "abort": true, 00:17:00.241 "seek_hole": false, 
00:17:00.241 "seek_data": false, 00:17:00.241 "copy": true, 00:17:00.241 "nvme_iov_md": false 00:17:00.241 }, 00:17:00.241 "memory_domains": [ 00:17:00.241 { 00:17:00.241 "dma_device_id": "system", 00:17:00.241 "dma_device_type": 1 00:17:00.241 }, 00:17:00.241 { 00:17:00.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.241 "dma_device_type": 2 00:17:00.241 } 00:17:00.241 ], 00:17:00.241 "driver_specific": {} 00:17:00.241 } 00:17:00.241 ] 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.241 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.520 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.520 "name": "Existed_Raid", 00:17:00.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.520 "strip_size_kb": 64, 00:17:00.520 "state": "configuring", 00:17:00.520 "raid_level": "concat", 00:17:00.520 "superblock": false, 00:17:00.520 "num_base_bdevs": 4, 00:17:00.520 "num_base_bdevs_discovered": 2, 00:17:00.520 "num_base_bdevs_operational": 4, 00:17:00.520 "base_bdevs_list": [ 00:17:00.520 { 00:17:00.520 "name": "BaseBdev1", 00:17:00.520 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:17:00.520 "is_configured": true, 00:17:00.520 "data_offset": 0, 00:17:00.520 "data_size": 65536 00:17:00.520 }, 00:17:00.520 { 00:17:00.520 "name": "BaseBdev2", 00:17:00.520 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:00.520 "is_configured": true, 00:17:00.520 "data_offset": 0, 00:17:00.520 "data_size": 65536 00:17:00.520 }, 00:17:00.520 { 00:17:00.520 "name": "BaseBdev3", 00:17:00.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.520 "is_configured": false, 00:17:00.520 "data_offset": 0, 00:17:00.520 "data_size": 0 00:17:00.520 }, 00:17:00.520 { 00:17:00.520 "name": "BaseBdev4", 00:17:00.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.520 "is_configured": 
false, 00:17:00.520 "data_offset": 0, 00:17:00.520 "data_size": 0 00:17:00.520 } 00:17:00.520 ] 00:17:00.520 }' 00:17:00.520 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.520 11:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.454 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.454 [2024-07-25 11:28:17.305961] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.454 BaseBdev3 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:01.454 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:02.018 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:02.018 [ 00:17:02.019 { 00:17:02.019 "name": "BaseBdev3", 00:17:02.019 "aliases": [ 00:17:02.019 "0fd75acb-8989-45af-93e7-9038d4afdf21" 00:17:02.019 ], 00:17:02.019 "product_name": "Malloc disk", 00:17:02.019 "block_size": 512, 00:17:02.019 "num_blocks": 65536, 00:17:02.019 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:02.019 "assigned_rate_limits": { 00:17:02.019 "rw_ios_per_sec": 0, 00:17:02.019 "rw_mbytes_per_sec": 0, 00:17:02.019 "r_mbytes_per_sec": 0, 00:17:02.019 "w_mbytes_per_sec": 0 00:17:02.019 }, 00:17:02.019 "claimed": true, 00:17:02.019 "claim_type": "exclusive_write", 00:17:02.019 "zoned": false, 00:17:02.019 "supported_io_types": { 00:17:02.019 "read": true, 00:17:02.019 "write": true, 00:17:02.019 "unmap": true, 00:17:02.019 "flush": true, 00:17:02.019 "reset": true, 00:17:02.019 "nvme_admin": false, 00:17:02.019 "nvme_io": false, 00:17:02.019 "nvme_io_md": false, 00:17:02.019 "write_zeroes": true, 00:17:02.019 "zcopy": true, 00:17:02.019 "get_zone_info": false, 00:17:02.019 "zone_management": false, 00:17:02.019 "zone_append": false, 00:17:02.019 "compare": false, 00:17:02.019 "compare_and_write": false, 00:17:02.019 "abort": true, 00:17:02.019 "seek_hole": false, 00:17:02.019 "seek_data": false, 00:17:02.019 "copy": true, 00:17:02.019 "nvme_iov_md": false 00:17:02.019 }, 00:17:02.019 "memory_domains": [ 00:17:02.019 { 00:17:02.019 "dma_device_id": "system", 00:17:02.019 "dma_device_type": 1 00:17:02.019 }, 00:17:02.019 { 00:17:02.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.019 "dma_device_type": 2 00:17:02.019 } 00:17:02.019 ], 00:17:02.019 "driver_specific": {} 00:17:02.019 } 00:17:02.019 ] 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.276 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.534 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.534 "name": "Existed_Raid", 00:17:02.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.534 "strip_size_kb": 64, 00:17:02.534 "state": "configuring", 00:17:02.534 "raid_level": "concat", 00:17:02.534 "superblock": false, 00:17:02.534 "num_base_bdevs": 4, 00:17:02.534 "num_base_bdevs_discovered": 3, 00:17:02.534 "num_base_bdevs_operational": 4, 00:17:02.534 "base_bdevs_list": [ 00:17:02.534 { 00:17:02.534 "name": "BaseBdev1", 00:17:02.534 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:17:02.534 "is_configured": true, 00:17:02.534 "data_offset": 0, 00:17:02.534 "data_size": 65536 00:17:02.534 }, 00:17:02.534 { 00:17:02.534 "name": "BaseBdev2", 00:17:02.534 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:02.534 "is_configured": true, 00:17:02.534 "data_offset": 0, 00:17:02.534 "data_size": 65536 00:17:02.534 }, 00:17:02.534 { 00:17:02.534 "name": "BaseBdev3", 00:17:02.534 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:02.534 "is_configured": true, 00:17:02.534 "data_offset": 0, 00:17:02.534 "data_size": 65536 00:17:02.534 }, 00:17:02.534 { 00:17:02.534 "name": "BaseBdev4", 00:17:02.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.534 "is_configured": false, 00:17:02.534 "data_offset": 0, 00:17:02.534 "data_size": 0 00:17:02.534 } 00:17:02.534 ] 00:17:02.534 }' 00:17:02.534 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.534 11:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.098 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:03.356 [2024-07-25 11:28:19.084394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:17:03.356 [2024-07-25 11:28:19.084470] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.356 [2024-07-25 11:28:19.084494] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:03.356 [2024-07-25 11:28:19.084886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.356 [2024-07-25 11:28:19.085109] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.356 [2024-07-25 11:28:19.085135] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:03.356 [2024-07-25 11:28:19.085426] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.356 BaseBdev4 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.356 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.614 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:03.871 [ 00:17:03.871 { 00:17:03.871 "name": "BaseBdev4", 00:17:03.871 "aliases": [ 00:17:03.871 "3a8ed2ef-6398-43d9-ab46-cecc751c4178" 00:17:03.871 ], 00:17:03.871 "product_name": "Malloc disk", 00:17:03.871 "block_size": 512, 00:17:03.871 "num_blocks": 65536, 00:17:03.871 "uuid": "3a8ed2ef-6398-43d9-ab46-cecc751c4178", 00:17:03.871 "assigned_rate_limits": { 00:17:03.871 "rw_ios_per_sec": 0, 00:17:03.871 "rw_mbytes_per_sec": 0, 00:17:03.871 "r_mbytes_per_sec": 0, 00:17:03.871 "w_mbytes_per_sec": 0 00:17:03.871 }, 00:17:03.871 "claimed": true, 00:17:03.871 "claim_type": "exclusive_write", 00:17:03.871 "zoned": false, 00:17:03.872 "supported_io_types": { 00:17:03.872 "read": true, 00:17:03.872 "write": true, 00:17:03.872 "unmap": true, 00:17:03.872 "flush": true, 00:17:03.872 "reset": true, 00:17:03.872 "nvme_admin": false, 00:17:03.872 "nvme_io": false, 00:17:03.872 "nvme_io_md": false, 00:17:03.872 "write_zeroes": true, 00:17:03.872 "zcopy": true, 00:17:03.872 "get_zone_info": false, 00:17:03.872 "zone_management": false, 00:17:03.872 "zone_append": false, 00:17:03.872 "compare": false, 00:17:03.872 "compare_and_write": false, 00:17:03.872 "abort": true, 00:17:03.872 "seek_hole": false, 00:17:03.872 "seek_data": false, 00:17:03.872 "copy": true, 00:17:03.872 "nvme_iov_md": false 00:17:03.872 }, 00:17:03.872 "memory_domains": [ 00:17:03.872 { 00:17:03.872 "dma_device_id": "system", 00:17:03.872 "dma_device_type": 1 00:17:03.872 }, 00:17:03.872 { 00:17:03.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.872 "dma_device_type": 2 00:17:03.872 } 00:17:03.872 ], 00:17:03.872 "driver_specific": {} 00:17:03.872 } 00:17:03.872 ] 
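Each pass of the loop above adds one more base bdev and re-runs verify_raid_bdev_state; once BaseBdev4 is claimed, the expected state flips from "configuring" to "online". Reduced to its essentials (a condensed reading of the fields the helper declares, not its literal source), the verification amounts to:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull the raid bdev's JSON once and compare it against the expected values
    # (raid_bdev_name, expected_state, raid_level, strip_size, operational count).
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    [[ $(jq -r .state                      <<< "$info") == online ]]
    [[ $(jq -r .raid_level                 <<< "$info") == concat ]]
    [[ $(jq -r .strip_size_kb              <<< "$info") == 64 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]
    [[ $(jq -r .num_base_bdevs_discovered  <<< "$info") == 4 ]]
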
00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.872 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.129 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.129 "name": "Existed_Raid", 00:17:04.129 "uuid": "455dab6e-bfc7-4faf-81db-7351654d73e0", 00:17:04.129 "strip_size_kb": 64, 00:17:04.129 "state": "online", 00:17:04.129 "raid_level": "concat", 00:17:04.129 "superblock": false, 00:17:04.129 "num_base_bdevs": 4, 00:17:04.129 "num_base_bdevs_discovered": 4, 00:17:04.129 "num_base_bdevs_operational": 4, 00:17:04.129 "base_bdevs_list": [ 00:17:04.129 { 00:17:04.129 "name": "BaseBdev1", 00:17:04.129 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:17:04.129 "is_configured": true, 00:17:04.129 "data_offset": 0, 00:17:04.129 "data_size": 65536 00:17:04.129 }, 00:17:04.129 { 00:17:04.129 "name": "BaseBdev2", 00:17:04.129 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:04.129 "is_configured": true, 00:17:04.129 "data_offset": 0, 00:17:04.129 "data_size": 65536 00:17:04.129 }, 00:17:04.129 { 00:17:04.129 "name": "BaseBdev3", 00:17:04.129 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:04.129 "is_configured": true, 00:17:04.129 "data_offset": 0, 00:17:04.129 "data_size": 65536 00:17:04.129 }, 00:17:04.129 { 00:17:04.129 "name": "BaseBdev4", 00:17:04.129 "uuid": "3a8ed2ef-6398-43d9-ab46-cecc751c4178", 00:17:04.129 "is_configured": true, 00:17:04.129 "data_offset": 0, 00:17:04.129 "data_size": 65536 00:17:04.129 } 00:17:04.129 ] 00:17:04.129 }' 00:17:04.129 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.129 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:04.694 
11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:04.694 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:04.952 [2024-07-25 11:28:20.737313] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.952 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:04.952 "name": "Existed_Raid", 00:17:04.952 "aliases": [ 00:17:04.952 "455dab6e-bfc7-4faf-81db-7351654d73e0" 00:17:04.952 ], 00:17:04.952 "product_name": "Raid Volume", 00:17:04.952 "block_size": 512, 00:17:04.952 "num_blocks": 262144, 00:17:04.952 "uuid": "455dab6e-bfc7-4faf-81db-7351654d73e0", 00:17:04.952 "assigned_rate_limits": { 00:17:04.952 "rw_ios_per_sec": 0, 00:17:04.952 "rw_mbytes_per_sec": 0, 00:17:04.952 "r_mbytes_per_sec": 0, 00:17:04.952 "w_mbytes_per_sec": 0 00:17:04.952 }, 00:17:04.952 "claimed": false, 00:17:04.952 "zoned": false, 00:17:04.952 "supported_io_types": { 00:17:04.952 "read": true, 00:17:04.952 "write": true, 00:17:04.952 "unmap": true, 00:17:04.952 "flush": true, 00:17:04.952 "reset": true, 00:17:04.952 "nvme_admin": false, 00:17:04.952 "nvme_io": false, 00:17:04.952 "nvme_io_md": false, 00:17:04.952 "write_zeroes": true, 00:17:04.952 "zcopy": false, 00:17:04.952 "get_zone_info": false, 00:17:04.952 "zone_management": false, 00:17:04.952 "zone_append": false, 00:17:04.952 "compare": false, 00:17:04.952 "compare_and_write": false, 00:17:04.952 "abort": false, 00:17:04.952 "seek_hole": false, 00:17:04.952 "seek_data": false, 00:17:04.952 "copy": false, 00:17:04.952 "nvme_iov_md": false 00:17:04.952 }, 00:17:04.952 "memory_domains": [ 00:17:04.952 { 00:17:04.952 "dma_device_id": "system", 00:17:04.952 "dma_device_type": 1 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.952 "dma_device_type": 2 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "system", 00:17:04.952 "dma_device_type": 1 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.952 "dma_device_type": 2 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "system", 00:17:04.952 "dma_device_type": 1 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.952 "dma_device_type": 2 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "system", 00:17:04.952 "dma_device_type": 1 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.952 "dma_device_type": 2 00:17:04.952 } 00:17:04.952 ], 00:17:04.952 "driver_specific": { 00:17:04.952 "raid": { 00:17:04.952 "uuid": "455dab6e-bfc7-4faf-81db-7351654d73e0", 00:17:04.952 "strip_size_kb": 64, 00:17:04.952 "state": "online", 00:17:04.952 "raid_level": "concat", 00:17:04.952 "superblock": false, 00:17:04.952 "num_base_bdevs": 4, 00:17:04.952 
"num_base_bdevs_discovered": 4, 00:17:04.952 "num_base_bdevs_operational": 4, 00:17:04.952 "base_bdevs_list": [ 00:17:04.952 { 00:17:04.952 "name": "BaseBdev1", 00:17:04.952 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:17:04.952 "is_configured": true, 00:17:04.952 "data_offset": 0, 00:17:04.952 "data_size": 65536 00:17:04.952 }, 00:17:04.952 { 00:17:04.952 "name": "BaseBdev2", 00:17:04.952 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:04.952 "is_configured": true, 00:17:04.952 "data_offset": 0, 00:17:04.952 "data_size": 65536 00:17:04.952 }, 00:17:04.952 { 00:17:04.953 "name": "BaseBdev3", 00:17:04.953 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:04.953 "is_configured": true, 00:17:04.953 "data_offset": 0, 00:17:04.953 "data_size": 65536 00:17:04.953 }, 00:17:04.953 { 00:17:04.953 "name": "BaseBdev4", 00:17:04.953 "uuid": "3a8ed2ef-6398-43d9-ab46-cecc751c4178", 00:17:04.953 "is_configured": true, 00:17:04.953 "data_offset": 0, 00:17:04.953 "data_size": 65536 00:17:04.953 } 00:17:04.953 ] 00:17:04.953 } 00:17:04.953 } 00:17:04.953 }' 00:17:04.953 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.953 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:04.953 BaseBdev2 00:17:04.953 BaseBdev3 00:17:04.953 BaseBdev4' 00:17:04.953 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.953 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.953 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:05.211 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.211 "name": "BaseBdev1", 00:17:05.211 "aliases": [ 00:17:05.211 "567238bd-a115-40d7-ba19-938216b804e2" 00:17:05.211 ], 00:17:05.211 "product_name": "Malloc disk", 00:17:05.211 "block_size": 512, 00:17:05.211 "num_blocks": 65536, 00:17:05.211 "uuid": "567238bd-a115-40d7-ba19-938216b804e2", 00:17:05.211 "assigned_rate_limits": { 00:17:05.211 "rw_ios_per_sec": 0, 00:17:05.211 "rw_mbytes_per_sec": 0, 00:17:05.211 "r_mbytes_per_sec": 0, 00:17:05.211 "w_mbytes_per_sec": 0 00:17:05.211 }, 00:17:05.211 "claimed": true, 00:17:05.211 "claim_type": "exclusive_write", 00:17:05.211 "zoned": false, 00:17:05.211 "supported_io_types": { 00:17:05.211 "read": true, 00:17:05.211 "write": true, 00:17:05.211 "unmap": true, 00:17:05.211 "flush": true, 00:17:05.211 "reset": true, 00:17:05.211 "nvme_admin": false, 00:17:05.211 "nvme_io": false, 00:17:05.211 "nvme_io_md": false, 00:17:05.211 "write_zeroes": true, 00:17:05.211 "zcopy": true, 00:17:05.211 "get_zone_info": false, 00:17:05.211 "zone_management": false, 00:17:05.211 "zone_append": false, 00:17:05.211 "compare": false, 00:17:05.211 "compare_and_write": false, 00:17:05.211 "abort": true, 00:17:05.211 "seek_hole": false, 00:17:05.211 "seek_data": false, 00:17:05.211 "copy": true, 00:17:05.211 "nvme_iov_md": false 00:17:05.211 }, 00:17:05.211 "memory_domains": [ 00:17:05.211 { 00:17:05.211 "dma_device_id": "system", 00:17:05.211 "dma_device_type": 1 00:17:05.211 }, 00:17:05.211 { 00:17:05.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.211 "dma_device_type": 2 00:17:05.211 } 00:17:05.211 ], 00:17:05.211 "driver_specific": {} 00:17:05.211 }' 
00:17:05.211 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.470 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:05.728 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:05.986 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.986 "name": "BaseBdev2", 00:17:05.986 "aliases": [ 00:17:05.986 "1139d0be-aacf-49ac-9b4a-f418a6e98863" 00:17:05.986 ], 00:17:05.986 "product_name": "Malloc disk", 00:17:05.986 "block_size": 512, 00:17:05.986 "num_blocks": 65536, 00:17:05.986 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:05.986 "assigned_rate_limits": { 00:17:05.986 "rw_ios_per_sec": 0, 00:17:05.986 "rw_mbytes_per_sec": 0, 00:17:05.986 "r_mbytes_per_sec": 0, 00:17:05.986 "w_mbytes_per_sec": 0 00:17:05.986 }, 00:17:05.986 "claimed": true, 00:17:05.986 "claim_type": "exclusive_write", 00:17:05.986 "zoned": false, 00:17:05.986 "supported_io_types": { 00:17:05.986 "read": true, 00:17:05.986 "write": true, 00:17:05.986 "unmap": true, 00:17:05.986 "flush": true, 00:17:05.986 "reset": true, 00:17:05.986 "nvme_admin": false, 00:17:05.986 "nvme_io": false, 00:17:05.986 "nvme_io_md": false, 00:17:05.986 "write_zeroes": true, 00:17:05.986 "zcopy": true, 00:17:05.986 "get_zone_info": false, 00:17:05.986 "zone_management": false, 00:17:05.986 "zone_append": false, 00:17:05.986 "compare": false, 00:17:05.986 "compare_and_write": false, 00:17:05.986 "abort": true, 00:17:05.986 "seek_hole": false, 00:17:05.986 "seek_data": false, 00:17:05.986 "copy": true, 00:17:05.986 "nvme_iov_md": false 00:17:05.986 }, 00:17:05.986 "memory_domains": [ 00:17:05.986 { 00:17:05.986 "dma_device_id": "system", 00:17:05.986 "dma_device_type": 1 00:17:05.986 }, 00:17:05.986 { 00:17:05.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.986 "dma_device_type": 2 00:17:05.986 } 00:17:05.986 ], 00:17:05.986 "driver_specific": {} 00:17:05.986 }' 00:17:05.986 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.986 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:17:05.986 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:05.986 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.244 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.244 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.244 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.244 11:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.244 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.244 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.244 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.501 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.502 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.502 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.502 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.760 "name": "BaseBdev3", 00:17:06.760 "aliases": [ 00:17:06.760 "0fd75acb-8989-45af-93e7-9038d4afdf21" 00:17:06.760 ], 00:17:06.760 "product_name": "Malloc disk", 00:17:06.760 "block_size": 512, 00:17:06.760 "num_blocks": 65536, 00:17:06.760 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:06.760 "assigned_rate_limits": { 00:17:06.760 "rw_ios_per_sec": 0, 00:17:06.760 "rw_mbytes_per_sec": 0, 00:17:06.760 "r_mbytes_per_sec": 0, 00:17:06.760 "w_mbytes_per_sec": 0 00:17:06.760 }, 00:17:06.760 "claimed": true, 00:17:06.760 "claim_type": "exclusive_write", 00:17:06.760 "zoned": false, 00:17:06.760 "supported_io_types": { 00:17:06.760 "read": true, 00:17:06.760 "write": true, 00:17:06.760 "unmap": true, 00:17:06.760 "flush": true, 00:17:06.760 "reset": true, 00:17:06.760 "nvme_admin": false, 00:17:06.760 "nvme_io": false, 00:17:06.760 "nvme_io_md": false, 00:17:06.760 "write_zeroes": true, 00:17:06.760 "zcopy": true, 00:17:06.760 "get_zone_info": false, 00:17:06.760 "zone_management": false, 00:17:06.760 "zone_append": false, 00:17:06.760 "compare": false, 00:17:06.760 "compare_and_write": false, 00:17:06.760 "abort": true, 00:17:06.760 "seek_hole": false, 00:17:06.760 "seek_data": false, 00:17:06.760 "copy": true, 00:17:06.760 "nvme_iov_md": false 00:17:06.760 }, 00:17:06.760 "memory_domains": [ 00:17:06.760 { 00:17:06.760 "dma_device_id": "system", 00:17:06.760 "dma_device_type": 1 00:17:06.760 }, 00:17:06.760 { 00:17:06.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.760 "dma_device_type": 2 00:17:06.760 } 00:17:06.760 ], 00:17:06.760 "driver_specific": {} 00:17:06.760 }' 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.760 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:07.019 11:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.277 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.277 "name": "BaseBdev4", 00:17:07.277 "aliases": [ 00:17:07.277 "3a8ed2ef-6398-43d9-ab46-cecc751c4178" 00:17:07.277 ], 00:17:07.277 "product_name": "Malloc disk", 00:17:07.277 "block_size": 512, 00:17:07.277 "num_blocks": 65536, 00:17:07.277 "uuid": "3a8ed2ef-6398-43d9-ab46-cecc751c4178", 00:17:07.277 "assigned_rate_limits": { 00:17:07.277 "rw_ios_per_sec": 0, 00:17:07.277 "rw_mbytes_per_sec": 0, 00:17:07.277 "r_mbytes_per_sec": 0, 00:17:07.277 "w_mbytes_per_sec": 0 00:17:07.277 }, 00:17:07.277 "claimed": true, 00:17:07.277 "claim_type": "exclusive_write", 00:17:07.277 "zoned": false, 00:17:07.277 "supported_io_types": { 00:17:07.277 "read": true, 00:17:07.277 "write": true, 00:17:07.277 "unmap": true, 00:17:07.277 "flush": true, 00:17:07.277 "reset": true, 00:17:07.277 "nvme_admin": false, 00:17:07.277 "nvme_io": false, 00:17:07.277 "nvme_io_md": false, 00:17:07.277 "write_zeroes": true, 00:17:07.277 "zcopy": true, 00:17:07.277 "get_zone_info": false, 00:17:07.277 "zone_management": false, 00:17:07.277 "zone_append": false, 00:17:07.277 "compare": false, 00:17:07.277 "compare_and_write": false, 00:17:07.277 "abort": true, 00:17:07.277 "seek_hole": false, 00:17:07.277 "seek_data": false, 00:17:07.277 "copy": true, 00:17:07.277 "nvme_iov_md": false 00:17:07.277 }, 00:17:07.277 "memory_domains": [ 00:17:07.277 { 00:17:07.277 "dma_device_id": "system", 00:17:07.277 "dma_device_type": 1 00:17:07.277 }, 00:17:07.277 { 00:17:07.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.277 "dma_device_type": 2 00:17:07.277 } 00:17:07.277 ], 00:17:07.277 "driver_specific": {} 00:17:07.277 }' 00:17:07.277 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.277 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.536 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.793 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.793 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.793 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.050 [2024-07-25 11:28:23.837880] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.050 [2024-07-25 11:28:23.837961] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.050 [2024-07-25 11:28:23.838034] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.309 11:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.567 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.567 "name": "Existed_Raid", 00:17:08.567 "uuid": "455dab6e-bfc7-4faf-81db-7351654d73e0", 00:17:08.567 "strip_size_kb": 64, 00:17:08.567 "state": "offline", 00:17:08.567 "raid_level": "concat", 00:17:08.567 "superblock": false, 00:17:08.567 "num_base_bdevs": 4, 
00:17:08.567 "num_base_bdevs_discovered": 3, 00:17:08.567 "num_base_bdevs_operational": 3, 00:17:08.567 "base_bdevs_list": [ 00:17:08.567 { 00:17:08.567 "name": null, 00:17:08.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.567 "is_configured": false, 00:17:08.567 "data_offset": 0, 00:17:08.567 "data_size": 65536 00:17:08.567 }, 00:17:08.567 { 00:17:08.567 "name": "BaseBdev2", 00:17:08.567 "uuid": "1139d0be-aacf-49ac-9b4a-f418a6e98863", 00:17:08.567 "is_configured": true, 00:17:08.567 "data_offset": 0, 00:17:08.567 "data_size": 65536 00:17:08.567 }, 00:17:08.567 { 00:17:08.567 "name": "BaseBdev3", 00:17:08.567 "uuid": "0fd75acb-8989-45af-93e7-9038d4afdf21", 00:17:08.567 "is_configured": true, 00:17:08.567 "data_offset": 0, 00:17:08.567 "data_size": 65536 00:17:08.567 }, 00:17:08.567 { 00:17:08.567 "name": "BaseBdev4", 00:17:08.567 "uuid": "3a8ed2ef-6398-43d9-ab46-cecc751c4178", 00:17:08.567 "is_configured": true, 00:17:08.567 "data_offset": 0, 00:17:08.567 "data_size": 65536 00:17:08.567 } 00:17:08.567 ] 00:17:08.567 }' 00:17:08.568 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.568 11:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.135 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:09.135 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:09.135 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:09.135 11:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.395 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:09.395 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:09.395 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:09.653 [2024-07-25 11:28:25.424790] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.653 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:09.653 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:09.653 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.653 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:10.220 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:10.220 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.220 11:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:10.220 [2024-07-25 11:28:26.064253] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:10.598 11:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.598 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:10.856 [2024-07-25 11:28:26.736521] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:10.856 [2024-07-25 11:28:26.736647] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:11.115 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:11.115 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:11.115 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.115 11:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:11.373 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:11.631 BaseBdev2 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.631 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.889 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:12.148 [ 00:17:12.148 { 00:17:12.148 "name": "BaseBdev2", 00:17:12.148 "aliases": [ 00:17:12.148 "72e4fa4a-fe2b-442d-9e10-2177a21651be" 00:17:12.148 ], 00:17:12.148 "product_name": "Malloc disk", 00:17:12.148 "block_size": 512, 00:17:12.148 "num_blocks": 65536, 00:17:12.148 "uuid": 
"72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:12.148 "assigned_rate_limits": { 00:17:12.148 "rw_ios_per_sec": 0, 00:17:12.148 "rw_mbytes_per_sec": 0, 00:17:12.148 "r_mbytes_per_sec": 0, 00:17:12.148 "w_mbytes_per_sec": 0 00:17:12.148 }, 00:17:12.148 "claimed": false, 00:17:12.148 "zoned": false, 00:17:12.148 "supported_io_types": { 00:17:12.148 "read": true, 00:17:12.148 "write": true, 00:17:12.148 "unmap": true, 00:17:12.148 "flush": true, 00:17:12.148 "reset": true, 00:17:12.148 "nvme_admin": false, 00:17:12.148 "nvme_io": false, 00:17:12.148 "nvme_io_md": false, 00:17:12.148 "write_zeroes": true, 00:17:12.148 "zcopy": true, 00:17:12.148 "get_zone_info": false, 00:17:12.148 "zone_management": false, 00:17:12.148 "zone_append": false, 00:17:12.148 "compare": false, 00:17:12.148 "compare_and_write": false, 00:17:12.148 "abort": true, 00:17:12.148 "seek_hole": false, 00:17:12.148 "seek_data": false, 00:17:12.148 "copy": true, 00:17:12.148 "nvme_iov_md": false 00:17:12.148 }, 00:17:12.148 "memory_domains": [ 00:17:12.148 { 00:17:12.148 "dma_device_id": "system", 00:17:12.148 "dma_device_type": 1 00:17:12.148 }, 00:17:12.148 { 00:17:12.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.148 "dma_device_type": 2 00:17:12.148 } 00:17:12.148 ], 00:17:12.148 "driver_specific": {} 00:17:12.148 } 00:17:12.148 ] 00:17:12.148 11:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:12.148 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:12.148 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:12.148 11:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:12.407 BaseBdev3 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:12.407 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.665 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:12.923 [ 00:17:12.923 { 00:17:12.923 "name": "BaseBdev3", 00:17:12.923 "aliases": [ 00:17:12.923 "07d7e914-e141-4c72-9438-fe8bcef8de11" 00:17:12.923 ], 00:17:12.923 "product_name": "Malloc disk", 00:17:12.923 "block_size": 512, 00:17:12.923 "num_blocks": 65536, 00:17:12.923 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:12.923 "assigned_rate_limits": { 00:17:12.923 "rw_ios_per_sec": 0, 00:17:12.923 "rw_mbytes_per_sec": 0, 00:17:12.923 "r_mbytes_per_sec": 0, 00:17:12.923 "w_mbytes_per_sec": 0 00:17:12.923 }, 00:17:12.923 "claimed": false, 00:17:12.923 "zoned": false, 00:17:12.923 "supported_io_types": { 00:17:12.923 
"read": true, 00:17:12.923 "write": true, 00:17:12.923 "unmap": true, 00:17:12.923 "flush": true, 00:17:12.923 "reset": true, 00:17:12.923 "nvme_admin": false, 00:17:12.923 "nvme_io": false, 00:17:12.923 "nvme_io_md": false, 00:17:12.923 "write_zeroes": true, 00:17:12.923 "zcopy": true, 00:17:12.923 "get_zone_info": false, 00:17:12.923 "zone_management": false, 00:17:12.923 "zone_append": false, 00:17:12.923 "compare": false, 00:17:12.923 "compare_and_write": false, 00:17:12.923 "abort": true, 00:17:12.923 "seek_hole": false, 00:17:12.923 "seek_data": false, 00:17:12.923 "copy": true, 00:17:12.923 "nvme_iov_md": false 00:17:12.923 }, 00:17:12.923 "memory_domains": [ 00:17:12.923 { 00:17:12.923 "dma_device_id": "system", 00:17:12.923 "dma_device_type": 1 00:17:12.923 }, 00:17:12.923 { 00:17:12.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.923 "dma_device_type": 2 00:17:12.923 } 00:17:12.923 ], 00:17:12.923 "driver_specific": {} 00:17:12.923 } 00:17:12.923 ] 00:17:12.923 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:12.923 11:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:12.923 11:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:12.923 11:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:13.182 BaseBdev4 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.182 11:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.440 11:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:13.698 [ 00:17:13.698 { 00:17:13.698 "name": "BaseBdev4", 00:17:13.698 "aliases": [ 00:17:13.698 "e4462462-da75-40b6-9914-6e1165ea7377" 00:17:13.698 ], 00:17:13.698 "product_name": "Malloc disk", 00:17:13.698 "block_size": 512, 00:17:13.698 "num_blocks": 65536, 00:17:13.698 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:13.698 "assigned_rate_limits": { 00:17:13.698 "rw_ios_per_sec": 0, 00:17:13.698 "rw_mbytes_per_sec": 0, 00:17:13.698 "r_mbytes_per_sec": 0, 00:17:13.698 "w_mbytes_per_sec": 0 00:17:13.698 }, 00:17:13.698 "claimed": false, 00:17:13.698 "zoned": false, 00:17:13.698 "supported_io_types": { 00:17:13.698 "read": true, 00:17:13.698 "write": true, 00:17:13.698 "unmap": true, 00:17:13.698 "flush": true, 00:17:13.698 "reset": true, 00:17:13.698 "nvme_admin": false, 00:17:13.698 "nvme_io": false, 00:17:13.698 "nvme_io_md": false, 00:17:13.698 "write_zeroes": true, 00:17:13.698 "zcopy": true, 00:17:13.698 "get_zone_info": false, 00:17:13.699 
"zone_management": false, 00:17:13.699 "zone_append": false, 00:17:13.699 "compare": false, 00:17:13.699 "compare_and_write": false, 00:17:13.699 "abort": true, 00:17:13.699 "seek_hole": false, 00:17:13.699 "seek_data": false, 00:17:13.699 "copy": true, 00:17:13.699 "nvme_iov_md": false 00:17:13.699 }, 00:17:13.699 "memory_domains": [ 00:17:13.699 { 00:17:13.699 "dma_device_id": "system", 00:17:13.699 "dma_device_type": 1 00:17:13.699 }, 00:17:13.699 { 00:17:13.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.699 "dma_device_type": 2 00:17:13.699 } 00:17:13.699 ], 00:17:13.699 "driver_specific": {} 00:17:13.699 } 00:17:13.699 ] 00:17:13.699 11:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:13.699 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:13.699 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:13.699 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:13.957 [2024-07-25 11:28:29.742319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.957 [2024-07-25 11:28:29.742388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:13.957 [2024-07-25 11:28:29.742424] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.957 [2024-07-25 11:28:29.744750] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.957 [2024-07-25 11:28:29.744831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.957 11:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.215 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.215 "name": "Existed_Raid", 00:17:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.216 "strip_size_kb": 
64, 00:17:14.216 "state": "configuring", 00:17:14.216 "raid_level": "concat", 00:17:14.216 "superblock": false, 00:17:14.216 "num_base_bdevs": 4, 00:17:14.216 "num_base_bdevs_discovered": 3, 00:17:14.216 "num_base_bdevs_operational": 4, 00:17:14.216 "base_bdevs_list": [ 00:17:14.216 { 00:17:14.216 "name": "BaseBdev1", 00:17:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.216 "is_configured": false, 00:17:14.216 "data_offset": 0, 00:17:14.216 "data_size": 0 00:17:14.216 }, 00:17:14.216 { 00:17:14.216 "name": "BaseBdev2", 00:17:14.216 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:14.216 "is_configured": true, 00:17:14.216 "data_offset": 0, 00:17:14.216 "data_size": 65536 00:17:14.216 }, 00:17:14.216 { 00:17:14.216 "name": "BaseBdev3", 00:17:14.216 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:14.216 "is_configured": true, 00:17:14.216 "data_offset": 0, 00:17:14.216 "data_size": 65536 00:17:14.216 }, 00:17:14.216 { 00:17:14.216 "name": "BaseBdev4", 00:17:14.216 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:14.216 "is_configured": true, 00:17:14.216 "data_offset": 0, 00:17:14.216 "data_size": 65536 00:17:14.216 } 00:17:14.216 ] 00:17:14.216 }' 00:17:14.216 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.216 11:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.150 11:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:15.150 [2024-07-25 11:28:31.022702] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.409 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.666 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.666 "name": "Existed_Raid", 00:17:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.666 "strip_size_kb": 64, 00:17:15.666 "state": "configuring", 00:17:15.666 "raid_level": "concat", 00:17:15.666 "superblock": false, 00:17:15.666 "num_base_bdevs": 
4, 00:17:15.666 "num_base_bdevs_discovered": 2, 00:17:15.666 "num_base_bdevs_operational": 4, 00:17:15.666 "base_bdevs_list": [ 00:17:15.666 { 00:17:15.666 "name": "BaseBdev1", 00:17:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.666 "is_configured": false, 00:17:15.666 "data_offset": 0, 00:17:15.666 "data_size": 0 00:17:15.666 }, 00:17:15.666 { 00:17:15.666 "name": null, 00:17:15.666 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:15.666 "is_configured": false, 00:17:15.666 "data_offset": 0, 00:17:15.667 "data_size": 65536 00:17:15.667 }, 00:17:15.667 { 00:17:15.667 "name": "BaseBdev3", 00:17:15.667 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:15.667 "is_configured": true, 00:17:15.667 "data_offset": 0, 00:17:15.667 "data_size": 65536 00:17:15.667 }, 00:17:15.667 { 00:17:15.667 "name": "BaseBdev4", 00:17:15.667 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:15.667 "is_configured": true, 00:17:15.667 "data_offset": 0, 00:17:15.667 "data_size": 65536 00:17:15.667 } 00:17:15.667 ] 00:17:15.667 }' 00:17:15.667 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.667 11:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.233 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.233 11:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:16.533 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:16.533 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:16.806 [2024-07-25 11:28:32.611367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.806 BaseBdev1 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:16.806 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.064 11:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:17.323 [ 00:17:17.323 { 00:17:17.323 "name": "BaseBdev1", 00:17:17.323 "aliases": [ 00:17:17.323 "a5d17e54-7094-4e7f-9e1e-d965985b018f" 00:17:17.323 ], 00:17:17.323 "product_name": "Malloc disk", 00:17:17.323 "block_size": 512, 00:17:17.323 "num_blocks": 65536, 00:17:17.323 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:17.323 "assigned_rate_limits": { 00:17:17.323 "rw_ios_per_sec": 0, 00:17:17.323 "rw_mbytes_per_sec": 0, 00:17:17.323 "r_mbytes_per_sec": 0, 
00:17:17.323 "w_mbytes_per_sec": 0 00:17:17.323 }, 00:17:17.323 "claimed": true, 00:17:17.323 "claim_type": "exclusive_write", 00:17:17.323 "zoned": false, 00:17:17.323 "supported_io_types": { 00:17:17.323 "read": true, 00:17:17.323 "write": true, 00:17:17.323 "unmap": true, 00:17:17.323 "flush": true, 00:17:17.323 "reset": true, 00:17:17.323 "nvme_admin": false, 00:17:17.323 "nvme_io": false, 00:17:17.323 "nvme_io_md": false, 00:17:17.323 "write_zeroes": true, 00:17:17.323 "zcopy": true, 00:17:17.323 "get_zone_info": false, 00:17:17.323 "zone_management": false, 00:17:17.323 "zone_append": false, 00:17:17.323 "compare": false, 00:17:17.323 "compare_and_write": false, 00:17:17.323 "abort": true, 00:17:17.323 "seek_hole": false, 00:17:17.323 "seek_data": false, 00:17:17.323 "copy": true, 00:17:17.323 "nvme_iov_md": false 00:17:17.323 }, 00:17:17.323 "memory_domains": [ 00:17:17.323 { 00:17:17.323 "dma_device_id": "system", 00:17:17.323 "dma_device_type": 1 00:17:17.323 }, 00:17:17.323 { 00:17:17.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.323 "dma_device_type": 2 00:17:17.323 } 00:17:17.323 ], 00:17:17.323 "driver_specific": {} 00:17:17.323 } 00:17:17.323 ] 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.585 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.843 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.843 "name": "Existed_Raid", 00:17:17.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.843 "strip_size_kb": 64, 00:17:17.843 "state": "configuring", 00:17:17.843 "raid_level": "concat", 00:17:17.843 "superblock": false, 00:17:17.843 "num_base_bdevs": 4, 00:17:17.843 "num_base_bdevs_discovered": 3, 00:17:17.843 "num_base_bdevs_operational": 4, 00:17:17.843 "base_bdevs_list": [ 00:17:17.843 { 00:17:17.843 "name": "BaseBdev1", 00:17:17.843 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:17.843 "is_configured": true, 00:17:17.843 "data_offset": 0, 00:17:17.843 "data_size": 65536 00:17:17.843 }, 00:17:17.843 { 00:17:17.843 "name": null, 00:17:17.843 
"uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:17.843 "is_configured": false, 00:17:17.843 "data_offset": 0, 00:17:17.843 "data_size": 65536 00:17:17.843 }, 00:17:17.843 { 00:17:17.843 "name": "BaseBdev3", 00:17:17.843 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:17.843 "is_configured": true, 00:17:17.843 "data_offset": 0, 00:17:17.843 "data_size": 65536 00:17:17.843 }, 00:17:17.843 { 00:17:17.843 "name": "BaseBdev4", 00:17:17.843 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:17.843 "is_configured": true, 00:17:17.843 "data_offset": 0, 00:17:17.843 "data_size": 65536 00:17:17.843 } 00:17:17.843 ] 00:17:17.843 }' 00:17:17.843 11:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.843 11:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.409 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.409 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:18.668 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:18.668 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:18.926 [2024-07-25 11:28:34.704108] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:18.926 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:18.926 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.926 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.926 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:18.926 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.927 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.185 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.185 "name": "Existed_Raid", 00:17:19.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.185 "strip_size_kb": 64, 00:17:19.185 "state": "configuring", 00:17:19.185 "raid_level": "concat", 00:17:19.185 "superblock": false, 00:17:19.185 "num_base_bdevs": 4, 00:17:19.185 "num_base_bdevs_discovered": 2, 00:17:19.185 "num_base_bdevs_operational": 4, 00:17:19.185 "base_bdevs_list": [ 
00:17:19.185 { 00:17:19.185 "name": "BaseBdev1", 00:17:19.185 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:19.185 "is_configured": true, 00:17:19.185 "data_offset": 0, 00:17:19.185 "data_size": 65536 00:17:19.185 }, 00:17:19.185 { 00:17:19.185 "name": null, 00:17:19.185 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:19.185 "is_configured": false, 00:17:19.185 "data_offset": 0, 00:17:19.185 "data_size": 65536 00:17:19.185 }, 00:17:19.185 { 00:17:19.185 "name": null, 00:17:19.185 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:19.185 "is_configured": false, 00:17:19.185 "data_offset": 0, 00:17:19.185 "data_size": 65536 00:17:19.185 }, 00:17:19.185 { 00:17:19.185 "name": "BaseBdev4", 00:17:19.185 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:19.185 "is_configured": true, 00:17:19.185 "data_offset": 0, 00:17:19.185 "data_size": 65536 00:17:19.185 } 00:17:19.185 ] 00:17:19.185 }' 00:17:19.185 11:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.185 11:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.118 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.118 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.118 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:20.118 11:28:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:20.377 [2024-07-25 11:28:36.152481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.377 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.636 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.636 "name": "Existed_Raid", 00:17:20.636 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:20.636 "strip_size_kb": 64, 00:17:20.636 "state": "configuring", 00:17:20.636 "raid_level": "concat", 00:17:20.636 "superblock": false, 00:17:20.636 "num_base_bdevs": 4, 00:17:20.636 "num_base_bdevs_discovered": 3, 00:17:20.636 "num_base_bdevs_operational": 4, 00:17:20.636 "base_bdevs_list": [ 00:17:20.636 { 00:17:20.636 "name": "BaseBdev1", 00:17:20.636 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:20.636 "is_configured": true, 00:17:20.636 "data_offset": 0, 00:17:20.636 "data_size": 65536 00:17:20.636 }, 00:17:20.636 { 00:17:20.636 "name": null, 00:17:20.636 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:20.636 "is_configured": false, 00:17:20.636 "data_offset": 0, 00:17:20.636 "data_size": 65536 00:17:20.636 }, 00:17:20.636 { 00:17:20.636 "name": "BaseBdev3", 00:17:20.636 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:20.636 "is_configured": true, 00:17:20.636 "data_offset": 0, 00:17:20.636 "data_size": 65536 00:17:20.636 }, 00:17:20.636 { 00:17:20.636 "name": "BaseBdev4", 00:17:20.636 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:20.636 "is_configured": true, 00:17:20.636 "data_offset": 0, 00:17:20.636 "data_size": 65536 00:17:20.636 } 00:17:20.636 ] 00:17:20.636 }' 00:17:20.636 11:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.636 11:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.203 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.203 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:21.461 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:21.461 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.718 [2024-07-25 11:28:37.532906] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.976 11:28:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.234 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.234 "name": "Existed_Raid", 00:17:22.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.234 "strip_size_kb": 64, 00:17:22.234 "state": "configuring", 00:17:22.234 "raid_level": "concat", 00:17:22.234 "superblock": false, 00:17:22.234 "num_base_bdevs": 4, 00:17:22.234 "num_base_bdevs_discovered": 2, 00:17:22.234 "num_base_bdevs_operational": 4, 00:17:22.234 "base_bdevs_list": [ 00:17:22.234 { 00:17:22.234 "name": null, 00:17:22.234 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:22.234 "is_configured": false, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 65536 00:17:22.234 }, 00:17:22.234 { 00:17:22.234 "name": null, 00:17:22.234 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:22.234 "is_configured": false, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 65536 00:17:22.234 }, 00:17:22.234 { 00:17:22.234 "name": "BaseBdev3", 00:17:22.234 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:22.234 "is_configured": true, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 65536 00:17:22.234 }, 00:17:22.234 { 00:17:22.234 "name": "BaseBdev4", 00:17:22.234 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:22.234 "is_configured": true, 00:17:22.234 "data_offset": 0, 00:17:22.234 "data_size": 65536 00:17:22.234 } 00:17:22.234 ] 00:17:22.234 }' 00:17:22.234 11:28:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.234 11:28:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.800 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.800 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:23.059 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:23.059 11:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:23.327 [2024-07-25 11:28:39.139946] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.327 11:28:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.327 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.596 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.596 "name": "Existed_Raid", 00:17:23.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.596 "strip_size_kb": 64, 00:17:23.596 "state": "configuring", 00:17:23.596 "raid_level": "concat", 00:17:23.596 "superblock": false, 00:17:23.596 "num_base_bdevs": 4, 00:17:23.596 "num_base_bdevs_discovered": 3, 00:17:23.596 "num_base_bdevs_operational": 4, 00:17:23.596 "base_bdevs_list": [ 00:17:23.596 { 00:17:23.596 "name": null, 00:17:23.596 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:23.596 "is_configured": false, 00:17:23.596 "data_offset": 0, 00:17:23.596 "data_size": 65536 00:17:23.596 }, 00:17:23.596 { 00:17:23.596 "name": "BaseBdev2", 00:17:23.596 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:23.596 "is_configured": true, 00:17:23.596 "data_offset": 0, 00:17:23.596 "data_size": 65536 00:17:23.596 }, 00:17:23.596 { 00:17:23.596 "name": "BaseBdev3", 00:17:23.596 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:23.596 "is_configured": true, 00:17:23.596 "data_offset": 0, 00:17:23.596 "data_size": 65536 00:17:23.596 }, 00:17:23.596 { 00:17:23.596 "name": "BaseBdev4", 00:17:23.596 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:23.596 "is_configured": true, 00:17:23.596 "data_offset": 0, 00:17:23.596 "data_size": 65536 00:17:23.596 } 00:17:23.596 ] 00:17:23.596 }' 00:17:23.596 11:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.596 11:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.165 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.165 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.423 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:24.682 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.682 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:24.682 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a5d17e54-7094-4e7f-9e1e-d965985b018f 00:17:25.247 [2024-07-25 11:28:40.872185] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:25.247 [2024-07-25 11:28:40.872247] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:25.247 [2024-07-25 11:28:40.872268] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:25.247 [2024-07-25 11:28:40.872648] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:25.247 [2024-07-25 11:28:40.872847] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:25.247 [2024-07-25 11:28:40.872864] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:25.247 [2024-07-25 11:28:40.873170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.247 NewBaseBdev 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.247 11:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.505 11:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:25.763 [ 00:17:25.763 { 00:17:25.763 "name": "NewBaseBdev", 00:17:25.763 "aliases": [ 00:17:25.763 "a5d17e54-7094-4e7f-9e1e-d965985b018f" 00:17:25.763 ], 00:17:25.763 "product_name": "Malloc disk", 00:17:25.763 "block_size": 512, 00:17:25.763 "num_blocks": 65536, 00:17:25.763 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:25.763 "assigned_rate_limits": { 00:17:25.763 "rw_ios_per_sec": 0, 00:17:25.763 "rw_mbytes_per_sec": 0, 00:17:25.763 "r_mbytes_per_sec": 0, 00:17:25.763 "w_mbytes_per_sec": 0 00:17:25.763 }, 00:17:25.763 "claimed": true, 00:17:25.763 "claim_type": "exclusive_write", 00:17:25.763 "zoned": false, 00:17:25.763 "supported_io_types": { 00:17:25.763 "read": true, 00:17:25.763 "write": true, 00:17:25.763 "unmap": true, 00:17:25.763 "flush": true, 00:17:25.763 "reset": true, 00:17:25.763 "nvme_admin": false, 00:17:25.763 "nvme_io": false, 00:17:25.763 "nvme_io_md": false, 00:17:25.763 "write_zeroes": true, 00:17:25.763 "zcopy": true, 00:17:25.763 "get_zone_info": false, 00:17:25.763 "zone_management": false, 00:17:25.763 "zone_append": false, 00:17:25.763 "compare": false, 00:17:25.763 "compare_and_write": false, 00:17:25.763 "abort": true, 00:17:25.763 "seek_hole": false, 00:17:25.763 "seek_data": false, 00:17:25.763 "copy": true, 00:17:25.763 "nvme_iov_md": false 00:17:25.763 }, 00:17:25.763 "memory_domains": [ 00:17:25.763 { 00:17:25.763 "dma_device_id": "system", 00:17:25.763 "dma_device_type": 1 00:17:25.763 }, 00:17:25.763 { 00:17:25.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.763 "dma_device_type": 2 00:17:25.763 } 00:17:25.763 ], 00:17:25.763 "driver_specific": {} 00:17:25.763 } 00:17:25.763 ] 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.763 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.023 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.023 "name": "Existed_Raid", 00:17:26.023 "uuid": "e4971c7c-8e8b-4599-b8d3-762b45d1ec10", 00:17:26.023 "strip_size_kb": 64, 00:17:26.023 "state": "online", 00:17:26.023 "raid_level": "concat", 00:17:26.023 "superblock": false, 00:17:26.023 "num_base_bdevs": 4, 00:17:26.023 "num_base_bdevs_discovered": 4, 00:17:26.023 "num_base_bdevs_operational": 4, 00:17:26.023 "base_bdevs_list": [ 00:17:26.023 { 00:17:26.023 "name": "NewBaseBdev", 00:17:26.023 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:26.023 "is_configured": true, 00:17:26.023 "data_offset": 0, 00:17:26.023 "data_size": 65536 00:17:26.023 }, 00:17:26.023 { 00:17:26.023 "name": "BaseBdev2", 00:17:26.023 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:26.023 "is_configured": true, 00:17:26.023 "data_offset": 0, 00:17:26.023 "data_size": 65536 00:17:26.023 }, 00:17:26.023 { 00:17:26.023 "name": "BaseBdev3", 00:17:26.023 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:26.023 "is_configured": true, 00:17:26.023 "data_offset": 0, 00:17:26.023 "data_size": 65536 00:17:26.023 }, 00:17:26.023 { 00:17:26.023 "name": "BaseBdev4", 00:17:26.023 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:26.023 "is_configured": true, 00:17:26.023 "data_offset": 0, 00:17:26.023 "data_size": 65536 00:17:26.023 } 00:17:26.023 ] 00:17:26.023 }' 00:17:26.023 11:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.023 11:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.597 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.862 [2024-07-25 11:28:42.585150] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.862 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.862 "name": "Existed_Raid", 00:17:26.862 "aliases": [ 00:17:26.862 "e4971c7c-8e8b-4599-b8d3-762b45d1ec10" 00:17:26.862 ], 00:17:26.862 "product_name": "Raid Volume", 00:17:26.862 "block_size": 512, 00:17:26.862 "num_blocks": 262144, 00:17:26.862 "uuid": "e4971c7c-8e8b-4599-b8d3-762b45d1ec10", 00:17:26.862 "assigned_rate_limits": { 00:17:26.862 "rw_ios_per_sec": 0, 00:17:26.862 "rw_mbytes_per_sec": 0, 00:17:26.862 "r_mbytes_per_sec": 0, 00:17:26.862 "w_mbytes_per_sec": 0 00:17:26.862 }, 00:17:26.862 "claimed": false, 00:17:26.862 "zoned": false, 00:17:26.862 "supported_io_types": { 00:17:26.862 "read": true, 00:17:26.862 "write": true, 00:17:26.862 "unmap": true, 00:17:26.862 "flush": true, 00:17:26.862 "reset": true, 00:17:26.862 "nvme_admin": false, 00:17:26.862 "nvme_io": false, 00:17:26.862 "nvme_io_md": false, 00:17:26.862 "write_zeroes": true, 00:17:26.862 "zcopy": false, 00:17:26.862 "get_zone_info": false, 00:17:26.862 "zone_management": false, 00:17:26.862 "zone_append": false, 00:17:26.862 "compare": false, 00:17:26.862 "compare_and_write": false, 00:17:26.862 "abort": false, 00:17:26.862 "seek_hole": false, 00:17:26.862 "seek_data": false, 00:17:26.862 "copy": false, 00:17:26.862 "nvme_iov_md": false 00:17:26.862 }, 00:17:26.862 "memory_domains": [ 00:17:26.862 { 00:17:26.862 "dma_device_id": "system", 00:17:26.862 "dma_device_type": 1 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.862 "dma_device_type": 2 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "system", 00:17:26.862 "dma_device_type": 1 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.862 "dma_device_type": 2 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "system", 00:17:26.862 "dma_device_type": 1 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.862 "dma_device_type": 2 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "system", 00:17:26.862 "dma_device_type": 1 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.862 "dma_device_type": 2 00:17:26.862 } 00:17:26.862 ], 00:17:26.862 "driver_specific": { 00:17:26.862 "raid": { 00:17:26.862 "uuid": "e4971c7c-8e8b-4599-b8d3-762b45d1ec10", 00:17:26.862 "strip_size_kb": 64, 00:17:26.862 "state": "online", 00:17:26.862 "raid_level": "concat", 00:17:26.862 "superblock": false, 00:17:26.862 "num_base_bdevs": 4, 00:17:26.862 "num_base_bdevs_discovered": 4, 00:17:26.862 "num_base_bdevs_operational": 4, 00:17:26.862 "base_bdevs_list": [ 00:17:26.862 { 00:17:26.862 "name": "NewBaseBdev", 00:17:26.862 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:26.862 "is_configured": true, 00:17:26.862 "data_offset": 0, 00:17:26.862 "data_size": 65536 00:17:26.862 }, 00:17:26.862 { 00:17:26.862 "name": "BaseBdev2", 00:17:26.863 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:26.863 "is_configured": true, 00:17:26.863 "data_offset": 0, 00:17:26.863 "data_size": 65536 00:17:26.863 }, 00:17:26.863 { 00:17:26.863 "name": "BaseBdev3", 00:17:26.863 
"uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:26.863 "is_configured": true, 00:17:26.863 "data_offset": 0, 00:17:26.863 "data_size": 65536 00:17:26.863 }, 00:17:26.863 { 00:17:26.863 "name": "BaseBdev4", 00:17:26.863 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:26.863 "is_configured": true, 00:17:26.863 "data_offset": 0, 00:17:26.863 "data_size": 65536 00:17:26.863 } 00:17:26.863 ] 00:17:26.863 } 00:17:26.863 } 00:17:26.863 }' 00:17:26.863 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.863 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:26.863 BaseBdev2 00:17:26.863 BaseBdev3 00:17:26.863 BaseBdev4' 00:17:26.863 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.863 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:26.863 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.128 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.128 "name": "NewBaseBdev", 00:17:27.128 "aliases": [ 00:17:27.128 "a5d17e54-7094-4e7f-9e1e-d965985b018f" 00:17:27.128 ], 00:17:27.128 "product_name": "Malloc disk", 00:17:27.128 "block_size": 512, 00:17:27.128 "num_blocks": 65536, 00:17:27.128 "uuid": "a5d17e54-7094-4e7f-9e1e-d965985b018f", 00:17:27.128 "assigned_rate_limits": { 00:17:27.128 "rw_ios_per_sec": 0, 00:17:27.128 "rw_mbytes_per_sec": 0, 00:17:27.128 "r_mbytes_per_sec": 0, 00:17:27.128 "w_mbytes_per_sec": 0 00:17:27.128 }, 00:17:27.128 "claimed": true, 00:17:27.128 "claim_type": "exclusive_write", 00:17:27.128 "zoned": false, 00:17:27.128 "supported_io_types": { 00:17:27.128 "read": true, 00:17:27.128 "write": true, 00:17:27.128 "unmap": true, 00:17:27.128 "flush": true, 00:17:27.128 "reset": true, 00:17:27.128 "nvme_admin": false, 00:17:27.128 "nvme_io": false, 00:17:27.128 "nvme_io_md": false, 00:17:27.128 "write_zeroes": true, 00:17:27.128 "zcopy": true, 00:17:27.128 "get_zone_info": false, 00:17:27.128 "zone_management": false, 00:17:27.128 "zone_append": false, 00:17:27.128 "compare": false, 00:17:27.128 "compare_and_write": false, 00:17:27.128 "abort": true, 00:17:27.128 "seek_hole": false, 00:17:27.128 "seek_data": false, 00:17:27.128 "copy": true, 00:17:27.128 "nvme_iov_md": false 00:17:27.128 }, 00:17:27.128 "memory_domains": [ 00:17:27.128 { 00:17:27.128 "dma_device_id": "system", 00:17:27.128 "dma_device_type": 1 00:17:27.128 }, 00:17:27.128 { 00:17:27.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.128 "dma_device_type": 2 00:17:27.128 } 00:17:27.128 ], 00:17:27.128 "driver_specific": {} 00:17:27.128 }' 00:17:27.128 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.128 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.128 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.128 11:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.397 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.666 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.666 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.666 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.666 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.939 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.939 "name": "BaseBdev2", 00:17:27.939 "aliases": [ 00:17:27.939 "72e4fa4a-fe2b-442d-9e10-2177a21651be" 00:17:27.939 ], 00:17:27.939 "product_name": "Malloc disk", 00:17:27.939 "block_size": 512, 00:17:27.939 "num_blocks": 65536, 00:17:27.939 "uuid": "72e4fa4a-fe2b-442d-9e10-2177a21651be", 00:17:27.939 "assigned_rate_limits": { 00:17:27.939 "rw_ios_per_sec": 0, 00:17:27.939 "rw_mbytes_per_sec": 0, 00:17:27.939 "r_mbytes_per_sec": 0, 00:17:27.939 "w_mbytes_per_sec": 0 00:17:27.939 }, 00:17:27.939 "claimed": true, 00:17:27.939 "claim_type": "exclusive_write", 00:17:27.939 "zoned": false, 00:17:27.939 "supported_io_types": { 00:17:27.939 "read": true, 00:17:27.939 "write": true, 00:17:27.939 "unmap": true, 00:17:27.939 "flush": true, 00:17:27.939 "reset": true, 00:17:27.939 "nvme_admin": false, 00:17:27.939 "nvme_io": false, 00:17:27.939 "nvme_io_md": false, 00:17:27.940 "write_zeroes": true, 00:17:27.940 "zcopy": true, 00:17:27.940 "get_zone_info": false, 00:17:27.940 "zone_management": false, 00:17:27.940 "zone_append": false, 00:17:27.940 "compare": false, 00:17:27.940 "compare_and_write": false, 00:17:27.940 "abort": true, 00:17:27.940 "seek_hole": false, 00:17:27.940 "seek_data": false, 00:17:27.940 "copy": true, 00:17:27.940 "nvme_iov_md": false 00:17:27.940 }, 00:17:27.940 "memory_domains": [ 00:17:27.940 { 00:17:27.940 "dma_device_id": "system", 00:17:27.940 "dma_device_type": 1 00:17:27.940 }, 00:17:27.940 { 00:17:27.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.940 "dma_device_type": 2 00:17:27.940 } 00:17:27.940 ], 00:17:27.940 "driver_specific": {} 00:17:27.940 }' 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.940 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:17:28.201 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:28.202 11:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.460 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.460 "name": "BaseBdev3", 00:17:28.460 "aliases": [ 00:17:28.460 "07d7e914-e141-4c72-9438-fe8bcef8de11" 00:17:28.460 ], 00:17:28.460 "product_name": "Malloc disk", 00:17:28.460 "block_size": 512, 00:17:28.460 "num_blocks": 65536, 00:17:28.460 "uuid": "07d7e914-e141-4c72-9438-fe8bcef8de11", 00:17:28.460 "assigned_rate_limits": { 00:17:28.460 "rw_ios_per_sec": 0, 00:17:28.460 "rw_mbytes_per_sec": 0, 00:17:28.460 "r_mbytes_per_sec": 0, 00:17:28.460 "w_mbytes_per_sec": 0 00:17:28.460 }, 00:17:28.460 "claimed": true, 00:17:28.460 "claim_type": "exclusive_write", 00:17:28.460 "zoned": false, 00:17:28.460 "supported_io_types": { 00:17:28.460 "read": true, 00:17:28.460 "write": true, 00:17:28.460 "unmap": true, 00:17:28.460 "flush": true, 00:17:28.460 "reset": true, 00:17:28.460 "nvme_admin": false, 00:17:28.460 "nvme_io": false, 00:17:28.460 "nvme_io_md": false, 00:17:28.460 "write_zeroes": true, 00:17:28.460 "zcopy": true, 00:17:28.460 "get_zone_info": false, 00:17:28.460 "zone_management": false, 00:17:28.460 "zone_append": false, 00:17:28.460 "compare": false, 00:17:28.460 "compare_and_write": false, 00:17:28.460 "abort": true, 00:17:28.460 "seek_hole": false, 00:17:28.460 "seek_data": false, 00:17:28.460 "copy": true, 00:17:28.460 "nvme_iov_md": false 00:17:28.460 }, 00:17:28.460 "memory_domains": [ 00:17:28.460 { 00:17:28.460 "dma_device_id": "system", 00:17:28.460 "dma_device_type": 1 00:17:28.460 }, 00:17:28.460 { 00:17:28.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.460 "dma_device_type": 2 00:17:28.460 } 00:17:28.460 ], 00:17:28.460 "driver_specific": {} 00:17:28.460 }' 00:17:28.460 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.460 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.460 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.460 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.718 11:28:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.976 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.976 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.976 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.976 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:28.976 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.233 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.233 "name": "BaseBdev4", 00:17:29.233 "aliases": [ 00:17:29.233 "e4462462-da75-40b6-9914-6e1165ea7377" 00:17:29.233 ], 00:17:29.233 "product_name": "Malloc disk", 00:17:29.233 "block_size": 512, 00:17:29.233 "num_blocks": 65536, 00:17:29.233 "uuid": "e4462462-da75-40b6-9914-6e1165ea7377", 00:17:29.233 "assigned_rate_limits": { 00:17:29.233 "rw_ios_per_sec": 0, 00:17:29.233 "rw_mbytes_per_sec": 0, 00:17:29.233 "r_mbytes_per_sec": 0, 00:17:29.233 "w_mbytes_per_sec": 0 00:17:29.233 }, 00:17:29.233 "claimed": true, 00:17:29.233 "claim_type": "exclusive_write", 00:17:29.233 "zoned": false, 00:17:29.233 "supported_io_types": { 00:17:29.233 "read": true, 00:17:29.233 "write": true, 00:17:29.234 "unmap": true, 00:17:29.234 "flush": true, 00:17:29.234 "reset": true, 00:17:29.234 "nvme_admin": false, 00:17:29.234 "nvme_io": false, 00:17:29.234 "nvme_io_md": false, 00:17:29.234 "write_zeroes": true, 00:17:29.234 "zcopy": true, 00:17:29.234 "get_zone_info": false, 00:17:29.234 "zone_management": false, 00:17:29.234 "zone_append": false, 00:17:29.234 "compare": false, 00:17:29.234 "compare_and_write": false, 00:17:29.234 "abort": true, 00:17:29.234 "seek_hole": false, 00:17:29.234 "seek_data": false, 00:17:29.234 "copy": true, 00:17:29.234 "nvme_iov_md": false 00:17:29.234 }, 00:17:29.234 "memory_domains": [ 00:17:29.234 { 00:17:29.234 "dma_device_id": "system", 00:17:29.234 "dma_device_type": 1 00:17:29.234 }, 00:17:29.234 { 00:17:29.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.234 "dma_device_type": 2 00:17:29.234 } 00:17:29.234 ], 00:17:29.234 "driver_specific": {} 00:17:29.234 }' 00:17:29.234 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.234 11:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.234 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.234 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.234 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.492 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.750 [2024-07-25 11:28:45.525485] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.750 [2024-07-25 11:28:45.525544] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.751 [2024-07-25 11:28:45.525674] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.751 [2024-07-25 11:28:45.525764] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.751 [2024-07-25 11:28:45.525788] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 79160 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79160 ']' 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 79160 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79160 00:17:29.751 killing process with pid 79160 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79160' 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 79160 00:17:29.751 [2024-07-25 11:28:45.568482] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.751 11:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 79160 00:17:30.315 [2024-07-25 11:28:45.928720] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.248 11:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:31.248 00:17:31.248 real 0m37.572s 00:17:31.248 user 1m9.038s 00:17:31.248 sys 0m4.767s 00:17:31.248 11:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.248 11:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.248 ************************************ 00:17:31.248 END TEST raid_state_function_test 00:17:31.248 ************************************ 00:17:31.506 11:28:47 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:31.506 11:28:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:31.506 11:28:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.506 11:28:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.506 ************************************ 00:17:31.506 START TEST raid_state_function_test_sb 00:17:31.506 ************************************ 00:17:31.506 11:28:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:31.506 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:31.507 Process raid pid: 80256 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=80256 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 80256' 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 80256 /var/tmp/spdk-raid.sock 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80256 ']' 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.507 11:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.507 [2024-07-25 11:28:47.297911] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:17:31.507 [2024-07-25 11:28:47.298332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.765 [2024-07-25 11:28:47.464527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.024 [2024-07-25 11:28:47.708186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.283 [2024-07-25 11:28:47.915125] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.283 [2024-07-25 11:28:47.915472] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.541 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.541 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:32.541 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:32.800 [2024-07-25 11:28:48.495207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.800 [2024-07-25 11:28:48.495281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.800 [2024-07-25 11:28:48.495312] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.800 [2024-07-25 11:28:48.495326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.800 [2024-07-25 11:28:48.495340] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.800 [2024-07-25 11:28:48.495352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.800 [2024-07-25 11:28:48.495364] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:32.800 [2024-07-25 
11:28:48.495375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.800 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.059 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.059 "name": "Existed_Raid", 00:17:33.059 "uuid": "b38c608b-e810-4117-84fb-1aafa53f409a", 00:17:33.059 "strip_size_kb": 64, 00:17:33.059 "state": "configuring", 00:17:33.059 "raid_level": "concat", 00:17:33.059 "superblock": true, 00:17:33.059 "num_base_bdevs": 4, 00:17:33.059 "num_base_bdevs_discovered": 0, 00:17:33.059 "num_base_bdevs_operational": 4, 00:17:33.059 "base_bdevs_list": [ 00:17:33.059 { 00:17:33.059 "name": "BaseBdev1", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.059 "is_configured": false, 00:17:33.059 "data_offset": 0, 00:17:33.059 "data_size": 0 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "name": "BaseBdev2", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.059 "is_configured": false, 00:17:33.059 "data_offset": 0, 00:17:33.059 "data_size": 0 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "name": "BaseBdev3", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.059 "is_configured": false, 00:17:33.059 "data_offset": 0, 00:17:33.059 "data_size": 0 00:17:33.059 }, 00:17:33.059 { 00:17:33.059 "name": "BaseBdev4", 00:17:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.059 "is_configured": false, 00:17:33.059 "data_offset": 0, 00:17:33.059 "data_size": 0 00:17:33.059 } 00:17:33.059 ] 00:17:33.059 }' 00:17:33.059 11:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.059 11:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.625 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.884 [2024-07-25 11:28:49.675323] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:17:33.884 [2024-07-25 11:28:49.675383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:33.884 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:34.142 [2024-07-25 11:28:49.943414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.142 [2024-07-25 11:28:49.943477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.142 [2024-07-25 11:28:49.943496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.142 [2024-07-25 11:28:49.943509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.142 [2024-07-25 11:28:49.943521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:34.142 [2024-07-25 11:28:49.943532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.142 [2024-07-25 11:28:49.943544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:34.142 [2024-07-25 11:28:49.943555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:34.142 11:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.400 [2024-07-25 11:28:50.212273] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.400 BaseBdev1 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:34.400 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.659 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.926 [ 00:17:34.926 { 00:17:34.926 "name": "BaseBdev1", 00:17:34.926 "aliases": [ 00:17:34.926 "d663e0fa-7fca-479b-852e-08290db6c3b0" 00:17:34.926 ], 00:17:34.926 "product_name": "Malloc disk", 00:17:34.926 "block_size": 512, 00:17:34.926 "num_blocks": 65536, 00:17:34.926 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:34.926 "assigned_rate_limits": { 00:17:34.926 "rw_ios_per_sec": 0, 00:17:34.926 "rw_mbytes_per_sec": 0, 00:17:34.926 "r_mbytes_per_sec": 0, 00:17:34.926 "w_mbytes_per_sec": 0 00:17:34.926 }, 00:17:34.926 "claimed": true, 00:17:34.926 "claim_type": "exclusive_write", 00:17:34.926 "zoned": false, 00:17:34.926 
"supported_io_types": { 00:17:34.926 "read": true, 00:17:34.926 "write": true, 00:17:34.926 "unmap": true, 00:17:34.926 "flush": true, 00:17:34.926 "reset": true, 00:17:34.926 "nvme_admin": false, 00:17:34.926 "nvme_io": false, 00:17:34.926 "nvme_io_md": false, 00:17:34.926 "write_zeroes": true, 00:17:34.926 "zcopy": true, 00:17:34.926 "get_zone_info": false, 00:17:34.926 "zone_management": false, 00:17:34.926 "zone_append": false, 00:17:34.926 "compare": false, 00:17:34.926 "compare_and_write": false, 00:17:34.926 "abort": true, 00:17:34.926 "seek_hole": false, 00:17:34.926 "seek_data": false, 00:17:34.926 "copy": true, 00:17:34.926 "nvme_iov_md": false 00:17:34.926 }, 00:17:34.926 "memory_domains": [ 00:17:34.926 { 00:17:34.926 "dma_device_id": "system", 00:17:34.926 "dma_device_type": 1 00:17:34.926 }, 00:17:34.926 { 00:17:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.926 "dma_device_type": 2 00:17:34.926 } 00:17:34.926 ], 00:17:34.926 "driver_specific": {} 00:17:34.926 } 00:17:34.926 ] 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.926 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.927 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.927 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.927 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.927 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.927 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.186 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.186 "name": "Existed_Raid", 00:17:35.186 "uuid": "b994c365-ef78-450d-bcd3-974ba5d1fd3d", 00:17:35.186 "strip_size_kb": 64, 00:17:35.186 "state": "configuring", 00:17:35.186 "raid_level": "concat", 00:17:35.186 "superblock": true, 00:17:35.186 "num_base_bdevs": 4, 00:17:35.186 "num_base_bdevs_discovered": 1, 00:17:35.186 "num_base_bdevs_operational": 4, 00:17:35.186 "base_bdevs_list": [ 00:17:35.186 { 00:17:35.186 "name": "BaseBdev1", 00:17:35.186 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:35.186 "is_configured": true, 00:17:35.186 "data_offset": 2048, 00:17:35.186 "data_size": 63488 00:17:35.186 }, 00:17:35.186 { 00:17:35.186 "name": "BaseBdev2", 00:17:35.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.186 "is_configured": false, 00:17:35.186 "data_offset": 0, 
00:17:35.186 "data_size": 0 00:17:35.186 }, 00:17:35.186 { 00:17:35.186 "name": "BaseBdev3", 00:17:35.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.186 "is_configured": false, 00:17:35.186 "data_offset": 0, 00:17:35.186 "data_size": 0 00:17:35.186 }, 00:17:35.186 { 00:17:35.186 "name": "BaseBdev4", 00:17:35.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.186 "is_configured": false, 00:17:35.186 "data_offset": 0, 00:17:35.186 "data_size": 0 00:17:35.186 } 00:17:35.186 ] 00:17:35.186 }' 00:17:35.186 11:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.186 11:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.752 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:36.076 [2024-07-25 11:28:51.840799] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:36.076 [2024-07-25 11:28:51.840879] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:36.076 11:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:36.335 [2024-07-25 11:28:52.072929] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:36.335 [2024-07-25 11:28:52.075313] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:36.335 [2024-07-25 11:28:52.075368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:36.335 [2024-07-25 11:28:52.075388] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:36.335 [2024-07-25 11:28:52.075402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:36.335 [2024-07-25 11:28:52.075427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:36.335 [2024-07-25 11:28:52.075439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.335 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.593 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.593 "name": "Existed_Raid", 00:17:36.593 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:36.593 "strip_size_kb": 64, 00:17:36.593 "state": "configuring", 00:17:36.593 "raid_level": "concat", 00:17:36.593 "superblock": true, 00:17:36.593 "num_base_bdevs": 4, 00:17:36.593 "num_base_bdevs_discovered": 1, 00:17:36.593 "num_base_bdevs_operational": 4, 00:17:36.593 "base_bdevs_list": [ 00:17:36.593 { 00:17:36.593 "name": "BaseBdev1", 00:17:36.593 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:36.593 "is_configured": true, 00:17:36.593 "data_offset": 2048, 00:17:36.593 "data_size": 63488 00:17:36.593 }, 00:17:36.593 { 00:17:36.593 "name": "BaseBdev2", 00:17:36.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.593 "is_configured": false, 00:17:36.593 "data_offset": 0, 00:17:36.593 "data_size": 0 00:17:36.593 }, 00:17:36.593 { 00:17:36.593 "name": "BaseBdev3", 00:17:36.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.594 "is_configured": false, 00:17:36.594 "data_offset": 0, 00:17:36.594 "data_size": 0 00:17:36.594 }, 00:17:36.594 { 00:17:36.594 "name": "BaseBdev4", 00:17:36.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.594 "is_configured": false, 00:17:36.594 "data_offset": 0, 00:17:36.594 "data_size": 0 00:17:36.594 } 00:17:36.594 ] 00:17:36.594 }' 00:17:36.594 11:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.594 11:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.528 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:37.528 [2024-07-25 11:28:53.395919] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.528 BaseBdev2 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:37.786 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:38.043 [ 00:17:38.043 { 00:17:38.043 "name": "BaseBdev2", 00:17:38.043 "aliases": [ 00:17:38.043 "1851abc7-a2b6-47d7-8e0b-26a91e2700ef" 00:17:38.043 ], 00:17:38.043 "product_name": "Malloc disk", 00:17:38.043 "block_size": 512, 00:17:38.043 "num_blocks": 65536, 00:17:38.043 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:38.043 "assigned_rate_limits": { 00:17:38.043 "rw_ios_per_sec": 0, 00:17:38.043 "rw_mbytes_per_sec": 0, 00:17:38.043 "r_mbytes_per_sec": 0, 00:17:38.043 "w_mbytes_per_sec": 0 00:17:38.043 }, 00:17:38.043 "claimed": true, 00:17:38.043 "claim_type": "exclusive_write", 00:17:38.043 "zoned": false, 00:17:38.043 "supported_io_types": { 00:17:38.043 "read": true, 00:17:38.043 "write": true, 00:17:38.043 "unmap": true, 00:17:38.043 "flush": true, 00:17:38.043 "reset": true, 00:17:38.043 "nvme_admin": false, 00:17:38.043 "nvme_io": false, 00:17:38.043 "nvme_io_md": false, 00:17:38.043 "write_zeroes": true, 00:17:38.043 "zcopy": true, 00:17:38.043 "get_zone_info": false, 00:17:38.043 "zone_management": false, 00:17:38.043 "zone_append": false, 00:17:38.043 "compare": false, 00:17:38.043 "compare_and_write": false, 00:17:38.043 "abort": true, 00:17:38.043 "seek_hole": false, 00:17:38.043 "seek_data": false, 00:17:38.043 "copy": true, 00:17:38.043 "nvme_iov_md": false 00:17:38.043 }, 00:17:38.043 "memory_domains": [ 00:17:38.043 { 00:17:38.043 "dma_device_id": "system", 00:17:38.043 "dma_device_type": 1 00:17:38.043 }, 00:17:38.043 { 00:17:38.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.043 "dma_device_type": 2 00:17:38.043 } 00:17:38.043 ], 00:17:38.043 "driver_specific": {} 00:17:38.043 } 00:17:38.043 ] 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.043 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.301 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.301 11:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
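Each base bdev in this test goes through the same create-and-wait sequence seen above for BaseBdev2: a 32 MiB malloc bdev with 512-byte blocks is created, pending bdev examination is allowed to finish, and the descriptor is fetched with a 2000 ms timeout before the raid state is verified again. A condensed sketch of that sequence, reusing the RPC socket, script path and arguments shown in the trace (the bdev name is only an example):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Create a 32 MiB malloc bdev with 512-byte blocks to act as a RAID member
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    # Let any pending bdev examination complete before querying the bdev
    "$rpc" -s "$sock" bdev_wait_for_examine
    # Fetch the bdev descriptor, waiting up to 2000 ms for it to appear
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev2 -t 2000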
00:17:38.301 11:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.301 "name": "Existed_Raid", 00:17:38.301 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:38.301 "strip_size_kb": 64, 00:17:38.301 "state": "configuring", 00:17:38.301 "raid_level": "concat", 00:17:38.301 "superblock": true, 00:17:38.301 "num_base_bdevs": 4, 00:17:38.301 "num_base_bdevs_discovered": 2, 00:17:38.301 "num_base_bdevs_operational": 4, 00:17:38.301 "base_bdevs_list": [ 00:17:38.301 { 00:17:38.301 "name": "BaseBdev1", 00:17:38.301 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:38.301 "is_configured": true, 00:17:38.301 "data_offset": 2048, 00:17:38.301 "data_size": 63488 00:17:38.301 }, 00:17:38.301 { 00:17:38.301 "name": "BaseBdev2", 00:17:38.301 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:38.301 "is_configured": true, 00:17:38.301 "data_offset": 2048, 00:17:38.301 "data_size": 63488 00:17:38.301 }, 00:17:38.301 { 00:17:38.301 "name": "BaseBdev3", 00:17:38.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.301 "is_configured": false, 00:17:38.301 "data_offset": 0, 00:17:38.301 "data_size": 0 00:17:38.301 }, 00:17:38.301 { 00:17:38.301 "name": "BaseBdev4", 00:17:38.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.301 "is_configured": false, 00:17:38.301 "data_offset": 0, 00:17:38.301 "data_size": 0 00:17:38.301 } 00:17:38.301 ] 00:17:38.301 }' 00:17:38.301 11:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.301 11:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.283 11:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.283 [2024-07-25 11:28:55.163235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.283 BaseBdev3 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:39.541 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.798 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:40.056 [ 00:17:40.057 { 00:17:40.057 "name": "BaseBdev3", 00:17:40.057 "aliases": [ 00:17:40.057 "ad4915a6-0c4f-48ed-8d60-e402191c963c" 00:17:40.057 ], 00:17:40.057 "product_name": "Malloc disk", 00:17:40.057 "block_size": 512, 00:17:40.057 "num_blocks": 65536, 00:17:40.057 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:40.057 "assigned_rate_limits": { 00:17:40.057 "rw_ios_per_sec": 0, 00:17:40.057 "rw_mbytes_per_sec": 0, 00:17:40.057 "r_mbytes_per_sec": 0, 
00:17:40.057 "w_mbytes_per_sec": 0 00:17:40.057 }, 00:17:40.057 "claimed": true, 00:17:40.057 "claim_type": "exclusive_write", 00:17:40.057 "zoned": false, 00:17:40.057 "supported_io_types": { 00:17:40.057 "read": true, 00:17:40.057 "write": true, 00:17:40.057 "unmap": true, 00:17:40.057 "flush": true, 00:17:40.057 "reset": true, 00:17:40.057 "nvme_admin": false, 00:17:40.057 "nvme_io": false, 00:17:40.057 "nvme_io_md": false, 00:17:40.057 "write_zeroes": true, 00:17:40.057 "zcopy": true, 00:17:40.057 "get_zone_info": false, 00:17:40.057 "zone_management": false, 00:17:40.057 "zone_append": false, 00:17:40.057 "compare": false, 00:17:40.057 "compare_and_write": false, 00:17:40.057 "abort": true, 00:17:40.057 "seek_hole": false, 00:17:40.057 "seek_data": false, 00:17:40.057 "copy": true, 00:17:40.057 "nvme_iov_md": false 00:17:40.057 }, 00:17:40.057 "memory_domains": [ 00:17:40.057 { 00:17:40.057 "dma_device_id": "system", 00:17:40.057 "dma_device_type": 1 00:17:40.057 }, 00:17:40.057 { 00:17:40.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.057 "dma_device_type": 2 00:17:40.057 } 00:17:40.057 ], 00:17:40.057 "driver_specific": {} 00:17:40.057 } 00:17:40.057 ] 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.057 11:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.315 11:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.315 "name": "Existed_Raid", 00:17:40.315 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:40.315 "strip_size_kb": 64, 00:17:40.315 "state": "configuring", 00:17:40.315 "raid_level": "concat", 00:17:40.315 "superblock": true, 00:17:40.315 "num_base_bdevs": 4, 00:17:40.315 "num_base_bdevs_discovered": 3, 00:17:40.315 "num_base_bdevs_operational": 4, 00:17:40.315 "base_bdevs_list": [ 00:17:40.315 { 00:17:40.315 
"name": "BaseBdev1", 00:17:40.315 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:40.315 "is_configured": true, 00:17:40.315 "data_offset": 2048, 00:17:40.315 "data_size": 63488 00:17:40.315 }, 00:17:40.315 { 00:17:40.315 "name": "BaseBdev2", 00:17:40.315 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:40.315 "is_configured": true, 00:17:40.315 "data_offset": 2048, 00:17:40.315 "data_size": 63488 00:17:40.315 }, 00:17:40.315 { 00:17:40.315 "name": "BaseBdev3", 00:17:40.315 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:40.315 "is_configured": true, 00:17:40.315 "data_offset": 2048, 00:17:40.315 "data_size": 63488 00:17:40.315 }, 00:17:40.315 { 00:17:40.315 "name": "BaseBdev4", 00:17:40.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.315 "is_configured": false, 00:17:40.315 "data_offset": 0, 00:17:40.315 "data_size": 0 00:17:40.315 } 00:17:40.315 ] 00:17:40.315 }' 00:17:40.315 11:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.315 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.881 11:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:41.141 [2024-07-25 11:28:56.918007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:41.141 [2024-07-25 11:28:56.918578] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:41.141 [2024-07-25 11:28:56.918761] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:41.141 [2024-07-25 11:28:56.919148] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:41.141 [2024-07-25 11:28:56.919490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:41.141 [2024-07-25 11:28:56.919641] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raBaseBdev4 00:17:41.141 id_bdev 0x617000007e80 00:17:41.141 [2024-07-25 11:28:56.919949] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:41.141 11:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.405 11:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:41.665 [ 00:17:41.665 { 00:17:41.665 "name": "BaseBdev4", 00:17:41.665 "aliases": [ 00:17:41.665 "704f5d09-2e99-43eb-8c12-4f01b07d9d6a" 00:17:41.665 ], 00:17:41.665 "product_name": "Malloc disk", 00:17:41.665 "block_size": 
512, 00:17:41.665 "num_blocks": 65536, 00:17:41.665 "uuid": "704f5d09-2e99-43eb-8c12-4f01b07d9d6a", 00:17:41.665 "assigned_rate_limits": { 00:17:41.665 "rw_ios_per_sec": 0, 00:17:41.665 "rw_mbytes_per_sec": 0, 00:17:41.665 "r_mbytes_per_sec": 0, 00:17:41.665 "w_mbytes_per_sec": 0 00:17:41.665 }, 00:17:41.665 "claimed": true, 00:17:41.665 "claim_type": "exclusive_write", 00:17:41.665 "zoned": false, 00:17:41.665 "supported_io_types": { 00:17:41.665 "read": true, 00:17:41.665 "write": true, 00:17:41.665 "unmap": true, 00:17:41.665 "flush": true, 00:17:41.665 "reset": true, 00:17:41.665 "nvme_admin": false, 00:17:41.665 "nvme_io": false, 00:17:41.665 "nvme_io_md": false, 00:17:41.665 "write_zeroes": true, 00:17:41.665 "zcopy": true, 00:17:41.665 "get_zone_info": false, 00:17:41.665 "zone_management": false, 00:17:41.665 "zone_append": false, 00:17:41.665 "compare": false, 00:17:41.665 "compare_and_write": false, 00:17:41.665 "abort": true, 00:17:41.665 "seek_hole": false, 00:17:41.665 "seek_data": false, 00:17:41.665 "copy": true, 00:17:41.665 "nvme_iov_md": false 00:17:41.665 }, 00:17:41.665 "memory_domains": [ 00:17:41.665 { 00:17:41.665 "dma_device_id": "system", 00:17:41.665 "dma_device_type": 1 00:17:41.665 }, 00:17:41.665 { 00:17:41.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.665 "dma_device_type": 2 00:17:41.665 } 00:17:41.665 ], 00:17:41.665 "driver_specific": {} 00:17:41.665 } 00:17:41.665 ] 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.665 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.924 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.924 "name": "Existed_Raid", 00:17:41.924 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:41.924 "strip_size_kb": 64, 00:17:41.924 "state": "online", 00:17:41.924 "raid_level": 
"concat", 00:17:41.924 "superblock": true, 00:17:41.924 "num_base_bdevs": 4, 00:17:41.924 "num_base_bdevs_discovered": 4, 00:17:41.924 "num_base_bdevs_operational": 4, 00:17:41.924 "base_bdevs_list": [ 00:17:41.924 { 00:17:41.924 "name": "BaseBdev1", 00:17:41.924 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:41.924 "is_configured": true, 00:17:41.924 "data_offset": 2048, 00:17:41.924 "data_size": 63488 00:17:41.924 }, 00:17:41.924 { 00:17:41.924 "name": "BaseBdev2", 00:17:41.924 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:41.924 "is_configured": true, 00:17:41.924 "data_offset": 2048, 00:17:41.924 "data_size": 63488 00:17:41.924 }, 00:17:41.924 { 00:17:41.924 "name": "BaseBdev3", 00:17:41.924 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:41.924 "is_configured": true, 00:17:41.924 "data_offset": 2048, 00:17:41.924 "data_size": 63488 00:17:41.924 }, 00:17:41.924 { 00:17:41.924 "name": "BaseBdev4", 00:17:41.924 "uuid": "704f5d09-2e99-43eb-8c12-4f01b07d9d6a", 00:17:41.924 "is_configured": true, 00:17:41.924 "data_offset": 2048, 00:17:41.924 "data_size": 63488 00:17:41.924 } 00:17:41.924 ] 00:17:41.924 }' 00:17:41.924 11:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.924 11:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.860 [2024-07-25 11:28:58.642959] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.860 "name": "Existed_Raid", 00:17:42.860 "aliases": [ 00:17:42.860 "3b8ff15a-61d3-4957-abeb-7e97957e04ba" 00:17:42.860 ], 00:17:42.860 "product_name": "Raid Volume", 00:17:42.860 "block_size": 512, 00:17:42.860 "num_blocks": 253952, 00:17:42.860 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:42.860 "assigned_rate_limits": { 00:17:42.860 "rw_ios_per_sec": 0, 00:17:42.860 "rw_mbytes_per_sec": 0, 00:17:42.860 "r_mbytes_per_sec": 0, 00:17:42.860 "w_mbytes_per_sec": 0 00:17:42.860 }, 00:17:42.860 "claimed": false, 00:17:42.860 "zoned": false, 00:17:42.860 "supported_io_types": { 00:17:42.860 "read": true, 00:17:42.860 "write": true, 00:17:42.860 "unmap": true, 00:17:42.860 "flush": true, 00:17:42.860 "reset": true, 00:17:42.860 "nvme_admin": false, 00:17:42.860 "nvme_io": false, 00:17:42.860 "nvme_io_md": false, 00:17:42.860 "write_zeroes": true, 00:17:42.860 "zcopy": false, 00:17:42.860 "get_zone_info": false, 00:17:42.860 "zone_management": false, 00:17:42.860 
"zone_append": false, 00:17:42.860 "compare": false, 00:17:42.860 "compare_and_write": false, 00:17:42.860 "abort": false, 00:17:42.860 "seek_hole": false, 00:17:42.860 "seek_data": false, 00:17:42.860 "copy": false, 00:17:42.860 "nvme_iov_md": false 00:17:42.860 }, 00:17:42.860 "memory_domains": [ 00:17:42.860 { 00:17:42.860 "dma_device_id": "system", 00:17:42.860 "dma_device_type": 1 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.860 "dma_device_type": 2 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "system", 00:17:42.860 "dma_device_type": 1 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.860 "dma_device_type": 2 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "system", 00:17:42.860 "dma_device_type": 1 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.860 "dma_device_type": 2 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "system", 00:17:42.860 "dma_device_type": 1 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.860 "dma_device_type": 2 00:17:42.860 } 00:17:42.860 ], 00:17:42.860 "driver_specific": { 00:17:42.860 "raid": { 00:17:42.860 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:42.860 "strip_size_kb": 64, 00:17:42.860 "state": "online", 00:17:42.860 "raid_level": "concat", 00:17:42.860 "superblock": true, 00:17:42.860 "num_base_bdevs": 4, 00:17:42.860 "num_base_bdevs_discovered": 4, 00:17:42.860 "num_base_bdevs_operational": 4, 00:17:42.860 "base_bdevs_list": [ 00:17:42.860 { 00:17:42.860 "name": "BaseBdev1", 00:17:42.860 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:42.860 "is_configured": true, 00:17:42.860 "data_offset": 2048, 00:17:42.860 "data_size": 63488 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "name": "BaseBdev2", 00:17:42.860 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:42.860 "is_configured": true, 00:17:42.860 "data_offset": 2048, 00:17:42.860 "data_size": 63488 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "name": "BaseBdev3", 00:17:42.860 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:42.860 "is_configured": true, 00:17:42.860 "data_offset": 2048, 00:17:42.860 "data_size": 63488 00:17:42.860 }, 00:17:42.860 { 00:17:42.860 "name": "BaseBdev4", 00:17:42.860 "uuid": "704f5d09-2e99-43eb-8c12-4f01b07d9d6a", 00:17:42.860 "is_configured": true, 00:17:42.860 "data_offset": 2048, 00:17:42.860 "data_size": 63488 00:17:42.860 } 00:17:42.860 ] 00:17:42.860 } 00:17:42.860 } 00:17:42.860 }' 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:42.860 BaseBdev2 00:17:42.860 BaseBdev3 00:17:42.860 BaseBdev4' 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:42.860 11:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.428 "name": "BaseBdev1", 00:17:43.428 "aliases": [ 00:17:43.428 
"d663e0fa-7fca-479b-852e-08290db6c3b0" 00:17:43.428 ], 00:17:43.428 "product_name": "Malloc disk", 00:17:43.428 "block_size": 512, 00:17:43.428 "num_blocks": 65536, 00:17:43.428 "uuid": "d663e0fa-7fca-479b-852e-08290db6c3b0", 00:17:43.428 "assigned_rate_limits": { 00:17:43.428 "rw_ios_per_sec": 0, 00:17:43.428 "rw_mbytes_per_sec": 0, 00:17:43.428 "r_mbytes_per_sec": 0, 00:17:43.428 "w_mbytes_per_sec": 0 00:17:43.428 }, 00:17:43.428 "claimed": true, 00:17:43.428 "claim_type": "exclusive_write", 00:17:43.428 "zoned": false, 00:17:43.428 "supported_io_types": { 00:17:43.428 "read": true, 00:17:43.428 "write": true, 00:17:43.428 "unmap": true, 00:17:43.428 "flush": true, 00:17:43.428 "reset": true, 00:17:43.428 "nvme_admin": false, 00:17:43.428 "nvme_io": false, 00:17:43.428 "nvme_io_md": false, 00:17:43.428 "write_zeroes": true, 00:17:43.428 "zcopy": true, 00:17:43.428 "get_zone_info": false, 00:17:43.428 "zone_management": false, 00:17:43.428 "zone_append": false, 00:17:43.428 "compare": false, 00:17:43.428 "compare_and_write": false, 00:17:43.428 "abort": true, 00:17:43.428 "seek_hole": false, 00:17:43.428 "seek_data": false, 00:17:43.428 "copy": true, 00:17:43.428 "nvme_iov_md": false 00:17:43.428 }, 00:17:43.428 "memory_domains": [ 00:17:43.428 { 00:17:43.428 "dma_device_id": "system", 00:17:43.428 "dma_device_type": 1 00:17:43.428 }, 00:17:43.428 { 00:17:43.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.428 "dma_device_type": 2 00:17:43.428 } 00:17:43.428 ], 00:17:43.428 "driver_specific": {} 00:17:43.428 }' 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.428 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:43.687 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.945 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.945 "name": "BaseBdev2", 00:17:43.945 "aliases": [ 00:17:43.945 "1851abc7-a2b6-47d7-8e0b-26a91e2700ef" 00:17:43.945 ], 00:17:43.945 "product_name": "Malloc disk", 00:17:43.945 "block_size": 512, 00:17:43.945 
"num_blocks": 65536, 00:17:43.945 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:43.945 "assigned_rate_limits": { 00:17:43.945 "rw_ios_per_sec": 0, 00:17:43.945 "rw_mbytes_per_sec": 0, 00:17:43.945 "r_mbytes_per_sec": 0, 00:17:43.945 "w_mbytes_per_sec": 0 00:17:43.945 }, 00:17:43.945 "claimed": true, 00:17:43.945 "claim_type": "exclusive_write", 00:17:43.945 "zoned": false, 00:17:43.945 "supported_io_types": { 00:17:43.945 "read": true, 00:17:43.945 "write": true, 00:17:43.945 "unmap": true, 00:17:43.945 "flush": true, 00:17:43.945 "reset": true, 00:17:43.945 "nvme_admin": false, 00:17:43.945 "nvme_io": false, 00:17:43.945 "nvme_io_md": false, 00:17:43.945 "write_zeroes": true, 00:17:43.945 "zcopy": true, 00:17:43.945 "get_zone_info": false, 00:17:43.945 "zone_management": false, 00:17:43.945 "zone_append": false, 00:17:43.945 "compare": false, 00:17:43.945 "compare_and_write": false, 00:17:43.945 "abort": true, 00:17:43.945 "seek_hole": false, 00:17:43.945 "seek_data": false, 00:17:43.945 "copy": true, 00:17:43.945 "nvme_iov_md": false 00:17:43.945 }, 00:17:43.945 "memory_domains": [ 00:17:43.945 { 00:17:43.945 "dma_device_id": "system", 00:17:43.945 "dma_device_type": 1 00:17:43.945 }, 00:17:43.945 { 00:17:43.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.945 "dma_device_type": 2 00:17:43.945 } 00:17:43.945 ], 00:17:43.945 "driver_specific": {} 00:17:43.945 }' 00:17:43.945 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.945 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.203 11:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.203 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.203 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.461 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.462 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.462 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.462 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:44.462 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:44.720 "name": "BaseBdev3", 00:17:44.720 "aliases": [ 00:17:44.720 "ad4915a6-0c4f-48ed-8d60-e402191c963c" 00:17:44.720 ], 00:17:44.720 "product_name": "Malloc disk", 00:17:44.720 "block_size": 512, 00:17:44.720 "num_blocks": 65536, 00:17:44.720 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:44.720 "assigned_rate_limits": { 00:17:44.720 "rw_ios_per_sec": 
0, 00:17:44.720 "rw_mbytes_per_sec": 0, 00:17:44.720 "r_mbytes_per_sec": 0, 00:17:44.720 "w_mbytes_per_sec": 0 00:17:44.720 }, 00:17:44.720 "claimed": true, 00:17:44.720 "claim_type": "exclusive_write", 00:17:44.720 "zoned": false, 00:17:44.720 "supported_io_types": { 00:17:44.720 "read": true, 00:17:44.720 "write": true, 00:17:44.720 "unmap": true, 00:17:44.720 "flush": true, 00:17:44.720 "reset": true, 00:17:44.720 "nvme_admin": false, 00:17:44.720 "nvme_io": false, 00:17:44.720 "nvme_io_md": false, 00:17:44.720 "write_zeroes": true, 00:17:44.720 "zcopy": true, 00:17:44.720 "get_zone_info": false, 00:17:44.720 "zone_management": false, 00:17:44.720 "zone_append": false, 00:17:44.720 "compare": false, 00:17:44.720 "compare_and_write": false, 00:17:44.720 "abort": true, 00:17:44.720 "seek_hole": false, 00:17:44.720 "seek_data": false, 00:17:44.720 "copy": true, 00:17:44.720 "nvme_iov_md": false 00:17:44.720 }, 00:17:44.720 "memory_domains": [ 00:17:44.720 { 00:17:44.720 "dma_device_id": "system", 00:17:44.720 "dma_device_type": 1 00:17:44.720 }, 00:17:44.720 { 00:17:44.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.720 "dma_device_type": 2 00:17:44.720 } 00:17:44.720 ], 00:17:44.720 "driver_specific": {} 00:17:44.720 }' 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:44.720 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.978 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.978 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:44.979 11:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:45.237 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:45.237 "name": "BaseBdev4", 00:17:45.237 "aliases": [ 00:17:45.237 "704f5d09-2e99-43eb-8c12-4f01b07d9d6a" 00:17:45.237 ], 00:17:45.237 "product_name": "Malloc disk", 00:17:45.237 "block_size": 512, 00:17:45.237 "num_blocks": 65536, 00:17:45.237 "uuid": "704f5d09-2e99-43eb-8c12-4f01b07d9d6a", 00:17:45.237 "assigned_rate_limits": { 00:17:45.237 "rw_ios_per_sec": 0, 00:17:45.237 "rw_mbytes_per_sec": 0, 00:17:45.237 "r_mbytes_per_sec": 0, 00:17:45.237 "w_mbytes_per_sec": 0 00:17:45.237 }, 00:17:45.237 "claimed": 
true, 00:17:45.237 "claim_type": "exclusive_write", 00:17:45.237 "zoned": false, 00:17:45.237 "supported_io_types": { 00:17:45.237 "read": true, 00:17:45.237 "write": true, 00:17:45.237 "unmap": true, 00:17:45.237 "flush": true, 00:17:45.237 "reset": true, 00:17:45.237 "nvme_admin": false, 00:17:45.237 "nvme_io": false, 00:17:45.237 "nvme_io_md": false, 00:17:45.237 "write_zeroes": true, 00:17:45.237 "zcopy": true, 00:17:45.237 "get_zone_info": false, 00:17:45.237 "zone_management": false, 00:17:45.237 "zone_append": false, 00:17:45.237 "compare": false, 00:17:45.237 "compare_and_write": false, 00:17:45.237 "abort": true, 00:17:45.237 "seek_hole": false, 00:17:45.237 "seek_data": false, 00:17:45.237 "copy": true, 00:17:45.237 "nvme_iov_md": false 00:17:45.237 }, 00:17:45.237 "memory_domains": [ 00:17:45.237 { 00:17:45.237 "dma_device_id": "system", 00:17:45.237 "dma_device_type": 1 00:17:45.237 }, 00:17:45.237 { 00:17:45.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.237 "dma_device_type": 2 00:17:45.237 } 00:17:45.237 ], 00:17:45.237 "driver_specific": {} 00:17:45.237 }' 00:17:45.237 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.237 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:45.496 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.753 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:45.753 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:45.753 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:46.011 [2024-07-25 11:29:01.695536] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.011 [2024-07-25 11:29:01.695580] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.011 [2024-07-25 11:29:01.695699] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.011 11:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.268 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.268 "name": "Existed_Raid", 00:17:46.268 "uuid": "3b8ff15a-61d3-4957-abeb-7e97957e04ba", 00:17:46.268 "strip_size_kb": 64, 00:17:46.268 "state": "offline", 00:17:46.268 "raid_level": "concat", 00:17:46.268 "superblock": true, 00:17:46.268 "num_base_bdevs": 4, 00:17:46.268 "num_base_bdevs_discovered": 3, 00:17:46.268 "num_base_bdevs_operational": 3, 00:17:46.268 "base_bdevs_list": [ 00:17:46.268 { 00:17:46.268 "name": null, 00:17:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.268 "is_configured": false, 00:17:46.268 "data_offset": 2048, 00:17:46.268 "data_size": 63488 00:17:46.268 }, 00:17:46.268 { 00:17:46.268 "name": "BaseBdev2", 00:17:46.268 "uuid": "1851abc7-a2b6-47d7-8e0b-26a91e2700ef", 00:17:46.268 "is_configured": true, 00:17:46.268 "data_offset": 2048, 00:17:46.268 "data_size": 63488 00:17:46.268 }, 00:17:46.268 { 00:17:46.268 "name": "BaseBdev3", 00:17:46.268 "uuid": "ad4915a6-0c4f-48ed-8d60-e402191c963c", 00:17:46.268 "is_configured": true, 00:17:46.268 "data_offset": 2048, 00:17:46.268 "data_size": 63488 00:17:46.268 }, 00:17:46.268 { 00:17:46.268 "name": "BaseBdev4", 00:17:46.268 "uuid": "704f5d09-2e99-43eb-8c12-4f01b07d9d6a", 00:17:46.268 "is_configured": true, 00:17:46.268 "data_offset": 2048, 00:17:46.268 "data_size": 63488 00:17:46.268 } 00:17:46.268 ] 00:17:46.268 }' 00:17:46.268 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.268 11:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.202 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:47.202 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:47.202 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.202 11:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq 
-r '.[0]["name"]' 00:17:47.460 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:47.460 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.460 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:47.718 [2024-07-25 11:29:03.353928] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.718 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:47.718 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:47.719 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.719 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:47.976 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:47.976 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.976 11:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:48.235 [2024-07-25 11:29:04.009827] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.493 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:49.059 [2024-07-25 11:29:04.651710] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:49.059 [2024-07-25 11:29:04.651785] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:49.059 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:49.059 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:49.059 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.059 11:29:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.317 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:49.317 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:49.317 
11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:49.317 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:49.317 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:49.317 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:49.575 BaseBdev2 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:49.575 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.833 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.124 [ 00:17:50.124 { 00:17:50.124 "name": "BaseBdev2", 00:17:50.124 "aliases": [ 00:17:50.124 "e13564f0-3369-482a-acde-681d7fc051ed" 00:17:50.124 ], 00:17:50.124 "product_name": "Malloc disk", 00:17:50.124 "block_size": 512, 00:17:50.124 "num_blocks": 65536, 00:17:50.124 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:50.125 "assigned_rate_limits": { 00:17:50.125 "rw_ios_per_sec": 0, 00:17:50.125 "rw_mbytes_per_sec": 0, 00:17:50.125 "r_mbytes_per_sec": 0, 00:17:50.125 "w_mbytes_per_sec": 0 00:17:50.125 }, 00:17:50.125 "claimed": false, 00:17:50.125 "zoned": false, 00:17:50.125 "supported_io_types": { 00:17:50.125 "read": true, 00:17:50.125 "write": true, 00:17:50.125 "unmap": true, 00:17:50.125 "flush": true, 00:17:50.125 "reset": true, 00:17:50.125 "nvme_admin": false, 00:17:50.125 "nvme_io": false, 00:17:50.125 "nvme_io_md": false, 00:17:50.125 "write_zeroes": true, 00:17:50.125 "zcopy": true, 00:17:50.125 "get_zone_info": false, 00:17:50.125 "zone_management": false, 00:17:50.125 "zone_append": false, 00:17:50.125 "compare": false, 00:17:50.125 "compare_and_write": false, 00:17:50.125 "abort": true, 00:17:50.125 "seek_hole": false, 00:17:50.125 "seek_data": false, 00:17:50.125 "copy": true, 00:17:50.125 "nvme_iov_md": false 00:17:50.125 }, 00:17:50.125 "memory_domains": [ 00:17:50.125 { 00:17:50.125 "dma_device_id": "system", 00:17:50.125 "dma_device_type": 1 00:17:50.125 }, 00:17:50.125 { 00:17:50.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.125 "dma_device_type": 2 00:17:50.125 } 00:17:50.125 ], 00:17:50.125 "driver_specific": {} 00:17:50.125 } 00:17:50.125 ] 00:17:50.125 11:29:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.125 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:50.125 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
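Note: at this point bdev_raid.sh is rebuilding its base devices before re-assembling the raid. A minimal standalone sketch of the same RPC sequence exercised in this part of the trace, assuming a target already listening on /var/tmp/spdk-raid.sock and using only commands that appear in the log (the BaseBdevN / Existed_Raid names are the test's own):

  # create a 32 MiB malloc bdev with 512-byte blocks for each base device (num_blocks 65536 x 512 in the dumps above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
  # assemble a concat raid with a 64 KiB strip size (-z 64) and an on-disk superblock (-s), as done later in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # inspect the resulting raid state the same way verify_raid_bdev_state does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | \
      jq -r '.[] | select(.name == "Existed_Raid")'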
00:17:50.125 11:29:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:50.382 BaseBdev3 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:50.382 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.640 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:50.898 [ 00:17:50.898 { 00:17:50.898 "name": "BaseBdev3", 00:17:50.898 "aliases": [ 00:17:50.898 "5612c8a0-1027-4b5a-8913-19a377ffab9c" 00:17:50.898 ], 00:17:50.898 "product_name": "Malloc disk", 00:17:50.898 "block_size": 512, 00:17:50.898 "num_blocks": 65536, 00:17:50.898 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:50.898 "assigned_rate_limits": { 00:17:50.898 "rw_ios_per_sec": 0, 00:17:50.898 "rw_mbytes_per_sec": 0, 00:17:50.898 "r_mbytes_per_sec": 0, 00:17:50.898 "w_mbytes_per_sec": 0 00:17:50.898 }, 00:17:50.898 "claimed": false, 00:17:50.898 "zoned": false, 00:17:50.898 "supported_io_types": { 00:17:50.898 "read": true, 00:17:50.898 "write": true, 00:17:50.898 "unmap": true, 00:17:50.898 "flush": true, 00:17:50.898 "reset": true, 00:17:50.898 "nvme_admin": false, 00:17:50.898 "nvme_io": false, 00:17:50.898 "nvme_io_md": false, 00:17:50.898 "write_zeroes": true, 00:17:50.898 "zcopy": true, 00:17:50.898 "get_zone_info": false, 00:17:50.898 "zone_management": false, 00:17:50.898 "zone_append": false, 00:17:50.898 "compare": false, 00:17:50.898 "compare_and_write": false, 00:17:50.898 "abort": true, 00:17:50.898 "seek_hole": false, 00:17:50.898 "seek_data": false, 00:17:50.898 "copy": true, 00:17:50.898 "nvme_iov_md": false 00:17:50.898 }, 00:17:50.898 "memory_domains": [ 00:17:50.898 { 00:17:50.898 "dma_device_id": "system", 00:17:50.898 "dma_device_type": 1 00:17:50.898 }, 00:17:50.898 { 00:17:50.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.898 "dma_device_type": 2 00:17:50.898 } 00:17:50.899 ], 00:17:50.899 "driver_specific": {} 00:17:50.899 } 00:17:50.899 ] 00:17:50.899 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:50.899 11:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:50.899 11:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:50.899 11:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.183 BaseBdev4 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:51.183 11:29:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.441 11:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:51.698 [ 00:17:51.698 { 00:17:51.698 "name": "BaseBdev4", 00:17:51.698 "aliases": [ 00:17:51.698 "8607c9db-ec4e-4cb4-b427-6d250382105e" 00:17:51.698 ], 00:17:51.698 "product_name": "Malloc disk", 00:17:51.698 "block_size": 512, 00:17:51.698 "num_blocks": 65536, 00:17:51.698 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:51.698 "assigned_rate_limits": { 00:17:51.698 "rw_ios_per_sec": 0, 00:17:51.698 "rw_mbytes_per_sec": 0, 00:17:51.698 "r_mbytes_per_sec": 0, 00:17:51.698 "w_mbytes_per_sec": 0 00:17:51.698 }, 00:17:51.698 "claimed": false, 00:17:51.698 "zoned": false, 00:17:51.698 "supported_io_types": { 00:17:51.698 "read": true, 00:17:51.699 "write": true, 00:17:51.699 "unmap": true, 00:17:51.699 "flush": true, 00:17:51.699 "reset": true, 00:17:51.699 "nvme_admin": false, 00:17:51.699 "nvme_io": false, 00:17:51.699 "nvme_io_md": false, 00:17:51.699 "write_zeroes": true, 00:17:51.699 "zcopy": true, 00:17:51.699 "get_zone_info": false, 00:17:51.699 "zone_management": false, 00:17:51.699 "zone_append": false, 00:17:51.699 "compare": false, 00:17:51.699 "compare_and_write": false, 00:17:51.699 "abort": true, 00:17:51.699 "seek_hole": false, 00:17:51.699 "seek_data": false, 00:17:51.699 "copy": true, 00:17:51.699 "nvme_iov_md": false 00:17:51.699 }, 00:17:51.699 "memory_domains": [ 00:17:51.699 { 00:17:51.699 "dma_device_id": "system", 00:17:51.699 "dma_device_type": 1 00:17:51.699 }, 00:17:51.699 { 00:17:51.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.699 "dma_device_type": 2 00:17:51.699 } 00:17:51.699 ], 00:17:51.699 "driver_specific": {} 00:17:51.699 } 00:17:51.699 ] 00:17:51.699 11:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:51.699 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:51.699 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:51.699 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.957 [2024-07-25 11:29:07.603421] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.957 [2024-07-25 11:29:07.603506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.957 [2024-07-25 11:29:07.603544] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:17:51.957 [2024-07-25 11:29:07.605959] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.957 [2024-07-25 11:29:07.606041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.957 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.215 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.215 "name": "Existed_Raid", 00:17:52.215 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:52.215 "strip_size_kb": 64, 00:17:52.215 "state": "configuring", 00:17:52.215 "raid_level": "concat", 00:17:52.215 "superblock": true, 00:17:52.215 "num_base_bdevs": 4, 00:17:52.215 "num_base_bdevs_discovered": 3, 00:17:52.215 "num_base_bdevs_operational": 4, 00:17:52.215 "base_bdevs_list": [ 00:17:52.215 { 00:17:52.215 "name": "BaseBdev1", 00:17:52.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.215 "is_configured": false, 00:17:52.215 "data_offset": 0, 00:17:52.215 "data_size": 0 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev2", 00:17:52.215 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 2048, 00:17:52.215 "data_size": 63488 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev3", 00:17:52.215 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 2048, 00:17:52.215 "data_size": 63488 00:17:52.215 }, 00:17:52.215 { 00:17:52.215 "name": "BaseBdev4", 00:17:52.215 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:52.215 "is_configured": true, 00:17:52.215 "data_offset": 2048, 00:17:52.215 "data_size": 63488 00:17:52.215 } 00:17:52.215 ] 00:17:52.215 }' 00:17:52.215 11:29:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.215 11:29:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.785 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:53.044 [2024-07-25 11:29:08.727663] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.044 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.303 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.303 "name": "Existed_Raid", 00:17:53.303 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:53.303 "strip_size_kb": 64, 00:17:53.303 "state": "configuring", 00:17:53.303 "raid_level": "concat", 00:17:53.303 "superblock": true, 00:17:53.303 "num_base_bdevs": 4, 00:17:53.303 "num_base_bdevs_discovered": 2, 00:17:53.303 "num_base_bdevs_operational": 4, 00:17:53.303 "base_bdevs_list": [ 00:17:53.303 { 00:17:53.303 "name": "BaseBdev1", 00:17:53.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.303 "is_configured": false, 00:17:53.303 "data_offset": 0, 00:17:53.303 "data_size": 0 00:17:53.303 }, 00:17:53.303 { 00:17:53.303 "name": null, 00:17:53.303 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:53.303 "is_configured": false, 00:17:53.303 "data_offset": 2048, 00:17:53.303 "data_size": 63488 00:17:53.303 }, 00:17:53.303 { 00:17:53.303 "name": "BaseBdev3", 00:17:53.303 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:53.303 "is_configured": true, 00:17:53.303 "data_offset": 2048, 00:17:53.303 "data_size": 63488 00:17:53.303 }, 00:17:53.303 { 00:17:53.303 "name": "BaseBdev4", 00:17:53.303 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:53.303 "is_configured": true, 00:17:53.303 "data_offset": 2048, 00:17:53.303 "data_size": 63488 00:17:53.303 } 00:17:53.303 ] 00:17:53.303 }' 00:17:53.303 11:29:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.303 11:29:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.868 11:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:53.868 11:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.126 11:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:54.126 11:29:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.384 [2024-07-25 11:29:10.139949] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.384 BaseBdev1 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:54.384 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.642 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:54.901 [ 00:17:54.901 { 00:17:54.901 "name": "BaseBdev1", 00:17:54.901 "aliases": [ 00:17:54.901 "ccceb353-0e38-4a0f-8d54-4f0b57cf8981" 00:17:54.901 ], 00:17:54.901 "product_name": "Malloc disk", 00:17:54.901 "block_size": 512, 00:17:54.901 "num_blocks": 65536, 00:17:54.901 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:17:54.901 "assigned_rate_limits": { 00:17:54.901 "rw_ios_per_sec": 0, 00:17:54.901 "rw_mbytes_per_sec": 0, 00:17:54.901 "r_mbytes_per_sec": 0, 00:17:54.901 "w_mbytes_per_sec": 0 00:17:54.901 }, 00:17:54.901 "claimed": true, 00:17:54.901 "claim_type": "exclusive_write", 00:17:54.901 "zoned": false, 00:17:54.901 "supported_io_types": { 00:17:54.901 "read": true, 00:17:54.901 "write": true, 00:17:54.901 "unmap": true, 00:17:54.901 "flush": true, 00:17:54.901 "reset": true, 00:17:54.901 "nvme_admin": false, 00:17:54.901 "nvme_io": false, 00:17:54.901 "nvme_io_md": false, 00:17:54.901 "write_zeroes": true, 00:17:54.901 "zcopy": true, 00:17:54.901 "get_zone_info": false, 00:17:54.901 "zone_management": false, 00:17:54.901 "zone_append": false, 00:17:54.901 "compare": false, 00:17:54.901 "compare_and_write": false, 00:17:54.901 "abort": true, 00:17:54.901 "seek_hole": false, 00:17:54.901 "seek_data": false, 00:17:54.901 "copy": true, 00:17:54.901 "nvme_iov_md": false 00:17:54.901 }, 00:17:54.901 "memory_domains": [ 00:17:54.901 { 00:17:54.901 "dma_device_id": "system", 00:17:54.901 "dma_device_type": 1 00:17:54.901 }, 00:17:54.901 { 00:17:54.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.901 "dma_device_type": 2 00:17:54.901 } 00:17:54.901 ], 00:17:54.901 "driver_specific": {} 00:17:54.901 } 00:17:54.901 ] 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.901 11:29:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.467 11:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.467 "name": "Existed_Raid", 00:17:55.467 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:55.467 "strip_size_kb": 64, 00:17:55.467 "state": "configuring", 00:17:55.467 "raid_level": "concat", 00:17:55.467 "superblock": true, 00:17:55.467 "num_base_bdevs": 4, 00:17:55.467 "num_base_bdevs_discovered": 3, 00:17:55.467 "num_base_bdevs_operational": 4, 00:17:55.467 "base_bdevs_list": [ 00:17:55.467 { 00:17:55.467 "name": "BaseBdev1", 00:17:55.467 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:17:55.467 "is_configured": true, 00:17:55.467 "data_offset": 2048, 00:17:55.467 "data_size": 63488 00:17:55.467 }, 00:17:55.467 { 00:17:55.467 "name": null, 00:17:55.467 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:55.467 "is_configured": false, 00:17:55.468 "data_offset": 2048, 00:17:55.468 "data_size": 63488 00:17:55.468 }, 00:17:55.468 { 00:17:55.468 "name": "BaseBdev3", 00:17:55.468 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:55.468 "is_configured": true, 00:17:55.468 "data_offset": 2048, 00:17:55.468 "data_size": 63488 00:17:55.468 }, 00:17:55.468 { 00:17:55.468 "name": "BaseBdev4", 00:17:55.468 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:55.468 "is_configured": true, 00:17:55.468 "data_offset": 2048, 00:17:55.468 "data_size": 63488 00:17:55.468 } 00:17:55.468 ] 00:17:55.468 }' 00:17:55.468 11:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.468 11:29:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.037 11:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.037 11:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:56.295 11:29:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:56.295 11:29:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:56.554 [2024-07-25 11:29:12.180668] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.554 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.812 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.812 "name": "Existed_Raid", 00:17:56.812 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:56.812 "strip_size_kb": 64, 00:17:56.812 "state": "configuring", 00:17:56.812 "raid_level": "concat", 00:17:56.812 "superblock": true, 00:17:56.812 "num_base_bdevs": 4, 00:17:56.812 "num_base_bdevs_discovered": 2, 00:17:56.812 "num_base_bdevs_operational": 4, 00:17:56.812 "base_bdevs_list": [ 00:17:56.812 { 00:17:56.812 "name": "BaseBdev1", 00:17:56.812 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:17:56.812 "is_configured": true, 00:17:56.812 "data_offset": 2048, 00:17:56.812 "data_size": 63488 00:17:56.812 }, 00:17:56.812 { 00:17:56.812 "name": null, 00:17:56.813 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:56.813 "is_configured": false, 00:17:56.813 "data_offset": 2048, 00:17:56.813 "data_size": 63488 00:17:56.813 }, 00:17:56.813 { 00:17:56.813 "name": null, 00:17:56.813 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:56.813 "is_configured": false, 00:17:56.813 "data_offset": 2048, 00:17:56.813 "data_size": 63488 00:17:56.813 }, 00:17:56.813 { 00:17:56.813 "name": "BaseBdev4", 00:17:56.813 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:56.813 "is_configured": true, 00:17:56.813 "data_offset": 2048, 00:17:56.813 "data_size": 63488 00:17:56.813 } 00:17:56.813 ] 00:17:56.813 }' 00:17:56.813 11:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.813 11:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.379 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:57.379 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.637 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:57.637 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:57.895 [2024-07-25 11:29:13.681143] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.895 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.153 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.153 "name": "Existed_Raid", 00:17:58.153 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:58.153 "strip_size_kb": 64, 00:17:58.153 "state": "configuring", 00:17:58.153 "raid_level": "concat", 00:17:58.153 "superblock": true, 00:17:58.153 "num_base_bdevs": 4, 00:17:58.153 "num_base_bdevs_discovered": 3, 00:17:58.153 "num_base_bdevs_operational": 4, 00:17:58.153 "base_bdevs_list": [ 00:17:58.153 { 00:17:58.153 "name": "BaseBdev1", 00:17:58.153 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:17:58.153 "is_configured": true, 00:17:58.153 "data_offset": 2048, 00:17:58.153 "data_size": 63488 00:17:58.153 }, 00:17:58.153 { 00:17:58.153 "name": null, 00:17:58.153 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:58.153 "is_configured": false, 00:17:58.153 "data_offset": 2048, 00:17:58.153 "data_size": 63488 00:17:58.153 }, 00:17:58.153 { 00:17:58.153 "name": "BaseBdev3", 00:17:58.153 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:58.153 "is_configured": true, 00:17:58.153 "data_offset": 2048, 00:17:58.153 "data_size": 63488 00:17:58.153 }, 00:17:58.153 { 00:17:58.153 "name": "BaseBdev4", 00:17:58.153 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:58.153 "is_configured": true, 00:17:58.153 "data_offset": 2048, 
00:17:58.153 "data_size": 63488 00:17:58.153 } 00:17:58.153 ] 00:17:58.153 }' 00:17:58.153 11:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.153 11:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.085 11:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.085 11:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:59.344 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:59.344 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:59.344 [2024-07-25 11:29:15.217584] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:59.602 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.603 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.603 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.603 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.603 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.603 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.861 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.861 "name": "Existed_Raid", 00:17:59.861 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:17:59.861 "strip_size_kb": 64, 00:17:59.861 "state": "configuring", 00:17:59.861 "raid_level": "concat", 00:17:59.861 "superblock": true, 00:17:59.861 "num_base_bdevs": 4, 00:17:59.861 "num_base_bdevs_discovered": 2, 00:17:59.861 "num_base_bdevs_operational": 4, 00:17:59.861 "base_bdevs_list": [ 00:17:59.861 { 00:17:59.861 "name": null, 00:17:59.861 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:17:59.861 "is_configured": false, 00:17:59.861 "data_offset": 2048, 00:17:59.861 "data_size": 63488 00:17:59.861 }, 00:17:59.861 { 00:17:59.861 "name": null, 00:17:59.861 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:17:59.861 "is_configured": false, 00:17:59.861 "data_offset": 2048, 00:17:59.861 "data_size": 63488 00:17:59.861 }, 00:17:59.861 { 00:17:59.861 "name": "BaseBdev3", 00:17:59.861 "uuid": 
"5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:17:59.861 "is_configured": true, 00:17:59.861 "data_offset": 2048, 00:17:59.861 "data_size": 63488 00:17:59.861 }, 00:17:59.861 { 00:17:59.861 "name": "BaseBdev4", 00:17:59.861 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:17:59.861 "is_configured": true, 00:17:59.861 "data_offset": 2048, 00:17:59.861 "data_size": 63488 00:17:59.861 } 00:17:59.861 ] 00:17:59.861 }' 00:17:59.861 11:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.861 11:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.426 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.426 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:00.684 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:00.684 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:00.942 [2024-07-25 11:29:16.739340] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.942 11:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.507 11:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.507 "name": "Existed_Raid", 00:18:01.507 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:18:01.507 "strip_size_kb": 64, 00:18:01.507 "state": "configuring", 00:18:01.507 "raid_level": "concat", 00:18:01.507 "superblock": true, 00:18:01.507 "num_base_bdevs": 4, 00:18:01.507 "num_base_bdevs_discovered": 3, 00:18:01.507 "num_base_bdevs_operational": 4, 00:18:01.507 "base_bdevs_list": [ 00:18:01.507 { 00:18:01.507 "name": null, 00:18:01.507 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:18:01.507 "is_configured": false, 
00:18:01.507 "data_offset": 2048, 00:18:01.507 "data_size": 63488 00:18:01.507 }, 00:18:01.507 { 00:18:01.507 "name": "BaseBdev2", 00:18:01.507 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:18:01.507 "is_configured": true, 00:18:01.507 "data_offset": 2048, 00:18:01.507 "data_size": 63488 00:18:01.507 }, 00:18:01.507 { 00:18:01.507 "name": "BaseBdev3", 00:18:01.507 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:18:01.507 "is_configured": true, 00:18:01.507 "data_offset": 2048, 00:18:01.507 "data_size": 63488 00:18:01.507 }, 00:18:01.507 { 00:18:01.507 "name": "BaseBdev4", 00:18:01.507 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:18:01.507 "is_configured": true, 00:18:01.507 "data_offset": 2048, 00:18:01.507 "data_size": 63488 00:18:01.507 } 00:18:01.507 ] 00:18:01.507 }' 00:18:01.507 11:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.507 11:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.092 11:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:02.092 11:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.349 11:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:02.349 11:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:02.349 11:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.607 11:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ccceb353-0e38-4a0f-8d54-4f0b57cf8981 00:18:02.864 [2024-07-25 11:29:18.626678] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:02.864 [2024-07-25 11:29:18.626979] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:02.864 [2024-07-25 11:29:18.627009] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:02.864 [2024-07-25 11:29:18.627315] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:02.864 [2024-07-25 11:29:18.627544] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:02.864 [2024-07-25 11:29:18.627563] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:02.864 [2024-07-25 11:29:18.627757] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.864 NewBaseBdev 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:02.864 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:03.123 11:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:03.381 [ 00:18:03.381 { 00:18:03.381 "name": "NewBaseBdev", 00:18:03.381 "aliases": [ 00:18:03.381 "ccceb353-0e38-4a0f-8d54-4f0b57cf8981" 00:18:03.381 ], 00:18:03.381 "product_name": "Malloc disk", 00:18:03.381 "block_size": 512, 00:18:03.381 "num_blocks": 65536, 00:18:03.381 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:18:03.381 "assigned_rate_limits": { 00:18:03.381 "rw_ios_per_sec": 0, 00:18:03.381 "rw_mbytes_per_sec": 0, 00:18:03.381 "r_mbytes_per_sec": 0, 00:18:03.381 "w_mbytes_per_sec": 0 00:18:03.381 }, 00:18:03.381 "claimed": true, 00:18:03.381 "claim_type": "exclusive_write", 00:18:03.381 "zoned": false, 00:18:03.381 "supported_io_types": { 00:18:03.381 "read": true, 00:18:03.381 "write": true, 00:18:03.381 "unmap": true, 00:18:03.381 "flush": true, 00:18:03.381 "reset": true, 00:18:03.381 "nvme_admin": false, 00:18:03.381 "nvme_io": false, 00:18:03.381 "nvme_io_md": false, 00:18:03.381 "write_zeroes": true, 00:18:03.381 "zcopy": true, 00:18:03.381 "get_zone_info": false, 00:18:03.381 "zone_management": false, 00:18:03.381 "zone_append": false, 00:18:03.381 "compare": false, 00:18:03.381 "compare_and_write": false, 00:18:03.381 "abort": true, 00:18:03.381 "seek_hole": false, 00:18:03.381 "seek_data": false, 00:18:03.381 "copy": true, 00:18:03.381 "nvme_iov_md": false 00:18:03.381 }, 00:18:03.381 "memory_domains": [ 00:18:03.381 { 00:18:03.381 "dma_device_id": "system", 00:18:03.381 "dma_device_type": 1 00:18:03.381 }, 00:18:03.381 { 00:18:03.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.381 "dma_device_type": 2 00:18:03.381 } 00:18:03.381 ], 00:18:03.381 "driver_specific": {} 00:18:03.381 } 00:18:03.381 ] 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.381 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.381 
11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.640 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.640 "name": "Existed_Raid", 00:18:03.640 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:18:03.640 "strip_size_kb": 64, 00:18:03.640 "state": "online", 00:18:03.640 "raid_level": "concat", 00:18:03.640 "superblock": true, 00:18:03.640 "num_base_bdevs": 4, 00:18:03.640 "num_base_bdevs_discovered": 4, 00:18:03.640 "num_base_bdevs_operational": 4, 00:18:03.640 "base_bdevs_list": [ 00:18:03.640 { 00:18:03.640 "name": "NewBaseBdev", 00:18:03.640 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:18:03.640 "is_configured": true, 00:18:03.640 "data_offset": 2048, 00:18:03.640 "data_size": 63488 00:18:03.640 }, 00:18:03.640 { 00:18:03.640 "name": "BaseBdev2", 00:18:03.640 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:18:03.640 "is_configured": true, 00:18:03.640 "data_offset": 2048, 00:18:03.640 "data_size": 63488 00:18:03.640 }, 00:18:03.640 { 00:18:03.640 "name": "BaseBdev3", 00:18:03.640 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:18:03.640 "is_configured": true, 00:18:03.640 "data_offset": 2048, 00:18:03.640 "data_size": 63488 00:18:03.640 }, 00:18:03.640 { 00:18:03.640 "name": "BaseBdev4", 00:18:03.640 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:18:03.640 "is_configured": true, 00:18:03.640 "data_offset": 2048, 00:18:03.640 "data_size": 63488 00:18:03.640 } 00:18:03.640 ] 00:18:03.640 }' 00:18:03.640 11:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.640 11:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:04.574 [2024-07-25 11:29:20.427693] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.574 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:04.574 "name": "Existed_Raid", 00:18:04.574 "aliases": [ 00:18:04.574 "3f3118cf-705d-4a19-9baa-5b4d9683235a" 00:18:04.574 ], 00:18:04.574 "product_name": "Raid Volume", 00:18:04.574 "block_size": 512, 00:18:04.574 "num_blocks": 253952, 00:18:04.574 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:18:04.574 "assigned_rate_limits": { 00:18:04.574 "rw_ios_per_sec": 0, 00:18:04.574 "rw_mbytes_per_sec": 0, 00:18:04.574 "r_mbytes_per_sec": 0, 00:18:04.574 "w_mbytes_per_sec": 0 00:18:04.574 }, 00:18:04.574 
"claimed": false, 00:18:04.574 "zoned": false, 00:18:04.574 "supported_io_types": { 00:18:04.574 "read": true, 00:18:04.574 "write": true, 00:18:04.574 "unmap": true, 00:18:04.574 "flush": true, 00:18:04.574 "reset": true, 00:18:04.574 "nvme_admin": false, 00:18:04.574 "nvme_io": false, 00:18:04.574 "nvme_io_md": false, 00:18:04.574 "write_zeroes": true, 00:18:04.574 "zcopy": false, 00:18:04.574 "get_zone_info": false, 00:18:04.574 "zone_management": false, 00:18:04.574 "zone_append": false, 00:18:04.574 "compare": false, 00:18:04.574 "compare_and_write": false, 00:18:04.574 "abort": false, 00:18:04.574 "seek_hole": false, 00:18:04.574 "seek_data": false, 00:18:04.574 "copy": false, 00:18:04.574 "nvme_iov_md": false 00:18:04.574 }, 00:18:04.574 "memory_domains": [ 00:18:04.574 { 00:18:04.575 "dma_device_id": "system", 00:18:04.575 "dma_device_type": 1 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.575 "dma_device_type": 2 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "system", 00:18:04.575 "dma_device_type": 1 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.575 "dma_device_type": 2 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "system", 00:18:04.575 "dma_device_type": 1 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.575 "dma_device_type": 2 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "system", 00:18:04.575 "dma_device_type": 1 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.575 "dma_device_type": 2 00:18:04.575 } 00:18:04.575 ], 00:18:04.575 "driver_specific": { 00:18:04.575 "raid": { 00:18:04.575 "uuid": "3f3118cf-705d-4a19-9baa-5b4d9683235a", 00:18:04.575 "strip_size_kb": 64, 00:18:04.575 "state": "online", 00:18:04.575 "raid_level": "concat", 00:18:04.575 "superblock": true, 00:18:04.575 "num_base_bdevs": 4, 00:18:04.575 "num_base_bdevs_discovered": 4, 00:18:04.575 "num_base_bdevs_operational": 4, 00:18:04.575 "base_bdevs_list": [ 00:18:04.575 { 00:18:04.575 "name": "NewBaseBdev", 00:18:04.575 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:18:04.575 "is_configured": true, 00:18:04.575 "data_offset": 2048, 00:18:04.575 "data_size": 63488 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "name": "BaseBdev2", 00:18:04.575 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:18:04.575 "is_configured": true, 00:18:04.575 "data_offset": 2048, 00:18:04.575 "data_size": 63488 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "name": "BaseBdev3", 00:18:04.575 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:18:04.575 "is_configured": true, 00:18:04.575 "data_offset": 2048, 00:18:04.575 "data_size": 63488 00:18:04.575 }, 00:18:04.575 { 00:18:04.575 "name": "BaseBdev4", 00:18:04.575 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:18:04.575 "is_configured": true, 00:18:04.575 "data_offset": 2048, 00:18:04.575 "data_size": 63488 00:18:04.575 } 00:18:04.575 ] 00:18:04.575 } 00:18:04.575 } 00:18:04.575 }' 00:18:04.575 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.832 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:04.832 BaseBdev2 00:18:04.832 BaseBdev3 00:18:04.832 BaseBdev4' 00:18:04.832 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:18:04.832 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:04.832 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.091 "name": "NewBaseBdev", 00:18:05.091 "aliases": [ 00:18:05.091 "ccceb353-0e38-4a0f-8d54-4f0b57cf8981" 00:18:05.091 ], 00:18:05.091 "product_name": "Malloc disk", 00:18:05.091 "block_size": 512, 00:18:05.091 "num_blocks": 65536, 00:18:05.091 "uuid": "ccceb353-0e38-4a0f-8d54-4f0b57cf8981", 00:18:05.091 "assigned_rate_limits": { 00:18:05.091 "rw_ios_per_sec": 0, 00:18:05.091 "rw_mbytes_per_sec": 0, 00:18:05.091 "r_mbytes_per_sec": 0, 00:18:05.091 "w_mbytes_per_sec": 0 00:18:05.091 }, 00:18:05.091 "claimed": true, 00:18:05.091 "claim_type": "exclusive_write", 00:18:05.091 "zoned": false, 00:18:05.091 "supported_io_types": { 00:18:05.091 "read": true, 00:18:05.091 "write": true, 00:18:05.091 "unmap": true, 00:18:05.091 "flush": true, 00:18:05.091 "reset": true, 00:18:05.091 "nvme_admin": false, 00:18:05.091 "nvme_io": false, 00:18:05.091 "nvme_io_md": false, 00:18:05.091 "write_zeroes": true, 00:18:05.091 "zcopy": true, 00:18:05.091 "get_zone_info": false, 00:18:05.091 "zone_management": false, 00:18:05.091 "zone_append": false, 00:18:05.091 "compare": false, 00:18:05.091 "compare_and_write": false, 00:18:05.091 "abort": true, 00:18:05.091 "seek_hole": false, 00:18:05.091 "seek_data": false, 00:18:05.091 "copy": true, 00:18:05.091 "nvme_iov_md": false 00:18:05.091 }, 00:18:05.091 "memory_domains": [ 00:18:05.091 { 00:18:05.091 "dma_device_id": "system", 00:18:05.091 "dma_device_type": 1 00:18:05.091 }, 00:18:05.091 { 00:18:05.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.091 "dma_device_type": 2 00:18:05.091 } 00:18:05.091 ], 00:18:05.091 "driver_specific": {} 00:18:05.091 }' 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.091 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.349 11:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.349 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.349 11:29:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:05.607 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.607 "name": "BaseBdev2", 00:18:05.607 "aliases": [ 00:18:05.607 "e13564f0-3369-482a-acde-681d7fc051ed" 00:18:05.607 ], 00:18:05.607 "product_name": "Malloc disk", 00:18:05.607 "block_size": 512, 00:18:05.607 "num_blocks": 65536, 00:18:05.607 "uuid": "e13564f0-3369-482a-acde-681d7fc051ed", 00:18:05.607 "assigned_rate_limits": { 00:18:05.607 "rw_ios_per_sec": 0, 00:18:05.607 "rw_mbytes_per_sec": 0, 00:18:05.607 "r_mbytes_per_sec": 0, 00:18:05.607 "w_mbytes_per_sec": 0 00:18:05.607 }, 00:18:05.607 "claimed": true, 00:18:05.607 "claim_type": "exclusive_write", 00:18:05.607 "zoned": false, 00:18:05.607 "supported_io_types": { 00:18:05.607 "read": true, 00:18:05.608 "write": true, 00:18:05.608 "unmap": true, 00:18:05.608 "flush": true, 00:18:05.608 "reset": true, 00:18:05.608 "nvme_admin": false, 00:18:05.608 "nvme_io": false, 00:18:05.608 "nvme_io_md": false, 00:18:05.608 "write_zeroes": true, 00:18:05.608 "zcopy": true, 00:18:05.608 "get_zone_info": false, 00:18:05.608 "zone_management": false, 00:18:05.608 "zone_append": false, 00:18:05.608 "compare": false, 00:18:05.608 "compare_and_write": false, 00:18:05.608 "abort": true, 00:18:05.608 "seek_hole": false, 00:18:05.608 "seek_data": false, 00:18:05.608 "copy": true, 00:18:05.608 "nvme_iov_md": false 00:18:05.608 }, 00:18:05.608 "memory_domains": [ 00:18:05.608 { 00:18:05.608 "dma_device_id": "system", 00:18:05.608 "dma_device_type": 1 00:18:05.608 }, 00:18:05.608 { 00:18:05.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.608 "dma_device_type": 2 00:18:05.608 } 00:18:05.608 ], 00:18:05.608 "driver_specific": {} 00:18:05.608 }' 00:18:05.608 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.608 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.608 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.608 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.866 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.124 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.124 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.124 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:06.124 11:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.382 "name": "BaseBdev3", 00:18:06.382 "aliases": [ 00:18:06.382 "5612c8a0-1027-4b5a-8913-19a377ffab9c" 00:18:06.382 ], 00:18:06.382 "product_name": "Malloc disk", 00:18:06.382 "block_size": 512, 00:18:06.382 "num_blocks": 65536, 00:18:06.382 "uuid": "5612c8a0-1027-4b5a-8913-19a377ffab9c", 00:18:06.382 "assigned_rate_limits": { 00:18:06.382 "rw_ios_per_sec": 0, 00:18:06.382 "rw_mbytes_per_sec": 0, 00:18:06.382 "r_mbytes_per_sec": 0, 00:18:06.382 "w_mbytes_per_sec": 0 00:18:06.382 }, 00:18:06.382 "claimed": true, 00:18:06.382 "claim_type": "exclusive_write", 00:18:06.382 "zoned": false, 00:18:06.382 "supported_io_types": { 00:18:06.382 "read": true, 00:18:06.382 "write": true, 00:18:06.382 "unmap": true, 00:18:06.382 "flush": true, 00:18:06.382 "reset": true, 00:18:06.382 "nvme_admin": false, 00:18:06.382 "nvme_io": false, 00:18:06.382 "nvme_io_md": false, 00:18:06.382 "write_zeroes": true, 00:18:06.382 "zcopy": true, 00:18:06.382 "get_zone_info": false, 00:18:06.382 "zone_management": false, 00:18:06.382 "zone_append": false, 00:18:06.382 "compare": false, 00:18:06.382 "compare_and_write": false, 00:18:06.382 "abort": true, 00:18:06.382 "seek_hole": false, 00:18:06.382 "seek_data": false, 00:18:06.382 "copy": true, 00:18:06.382 "nvme_iov_md": false 00:18:06.382 }, 00:18:06.382 "memory_domains": [ 00:18:06.382 { 00:18:06.382 "dma_device_id": "system", 00:18:06.382 "dma_device_type": 1 00:18:06.382 }, 00:18:06.382 { 00:18:06.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.382 "dma_device_type": 2 00:18:06.382 } 00:18:06.382 ], 00:18:06.382 "driver_specific": {} 00:18:06.382 }' 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.382 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:06.641 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.899 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.899 "name": 
"BaseBdev4", 00:18:06.899 "aliases": [ 00:18:06.899 "8607c9db-ec4e-4cb4-b427-6d250382105e" 00:18:06.899 ], 00:18:06.899 "product_name": "Malloc disk", 00:18:06.899 "block_size": 512, 00:18:06.899 "num_blocks": 65536, 00:18:06.899 "uuid": "8607c9db-ec4e-4cb4-b427-6d250382105e", 00:18:06.899 "assigned_rate_limits": { 00:18:06.899 "rw_ios_per_sec": 0, 00:18:06.899 "rw_mbytes_per_sec": 0, 00:18:06.899 "r_mbytes_per_sec": 0, 00:18:06.899 "w_mbytes_per_sec": 0 00:18:06.899 }, 00:18:06.899 "claimed": true, 00:18:06.899 "claim_type": "exclusive_write", 00:18:06.899 "zoned": false, 00:18:06.899 "supported_io_types": { 00:18:06.899 "read": true, 00:18:06.899 "write": true, 00:18:06.899 "unmap": true, 00:18:06.899 "flush": true, 00:18:06.899 "reset": true, 00:18:06.899 "nvme_admin": false, 00:18:06.899 "nvme_io": false, 00:18:06.899 "nvme_io_md": false, 00:18:06.899 "write_zeroes": true, 00:18:06.899 "zcopy": true, 00:18:06.899 "get_zone_info": false, 00:18:06.899 "zone_management": false, 00:18:06.899 "zone_append": false, 00:18:06.899 "compare": false, 00:18:06.899 "compare_and_write": false, 00:18:06.899 "abort": true, 00:18:06.899 "seek_hole": false, 00:18:06.899 "seek_data": false, 00:18:06.899 "copy": true, 00:18:06.899 "nvme_iov_md": false 00:18:06.899 }, 00:18:06.899 "memory_domains": [ 00:18:06.899 { 00:18:06.899 "dma_device_id": "system", 00:18:06.899 "dma_device_type": 1 00:18:06.899 }, 00:18:06.899 { 00:18:06.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.899 "dma_device_type": 2 00:18:06.899 } 00:18:06.899 ], 00:18:06.899 "driver_specific": {} 00:18:06.899 }' 00:18:06.899 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.899 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.156 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.156 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.156 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.156 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.156 11:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.156 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.414 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.414 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.414 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.414 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.414 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.672 [2024-07-25 11:29:23.416047] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.672 [2024-07-25 11:29:23.416094] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.672 [2024-07-25 11:29:23.416207] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.672 [2024-07-25 11:29:23.416303] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:18:07.672 [2024-07-25 11:29:23.416323] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 80256 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80256 ']' 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80256 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80256 00:18:07.672 killing process with pid 80256 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80256' 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80256 00:18:07.672 [2024-07-25 11:29:23.462818] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.672 11:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80256 00:18:08.236 [2024-07-25 11:29:23.826642] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.167 ************************************ 00:18:09.167 END TEST raid_state_function_test_sb 00:18:09.167 ************************************ 00:18:09.168 11:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:09.168 00:18:09.168 real 0m37.878s 00:18:09.168 user 1m9.485s 00:18:09.168 sys 0m4.969s 00:18:09.168 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:09.168 11:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 11:29:25 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:09.426 11:29:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:09.426 11:29:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:09.426 11:29:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 ************************************ 00:18:09.426 START TEST raid_superblock_test 00:18:09.426 ************************************ 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 
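(Annotation: the state-function test above tears its array down with a single RPC before killing the app. A minimal sketch of that teardown, using only the calls visible in this trace — socket path as logged, the PID 80256 is specific to this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
    kill 80256 && wait 80256    # killprocess() from autotest_common.sh, as traced above

)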
00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=81350 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 81350 /var/tmp/spdk-raid.sock 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81350 ']' 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:09.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:09.426 11:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 [2024-07-25 11:29:25.197833] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:18:09.426 [2024-07-25 11:29:25.198007] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81350 ] 00:18:09.683 [2024-07-25 11:29:25.375532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.941 [2024-07-25 11:29:25.659969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.199 [2024-07-25 11:29:25.864768] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.199 [2024-07-25 11:29:25.864845] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.458 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:10.716 malloc1 00:18:10.716 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.716 [2024-07-25 11:29:26.575913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.716 [2024-07-25 11:29:26.576282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.716 [2024-07-25 11:29:26.576442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.716 [2024-07-25 11:29:26.576596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.716 [2024-07-25 11:29:26.579586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.716 [2024-07-25 11:29:26.579783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.716 pt1 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:10.973 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:11.231 malloc2 00:18:11.231 11:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.488 [2024-07-25 11:29:27.143301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.488 [2024-07-25 11:29:27.143400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.488 [2024-07-25 11:29:27.143433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:11.488 [2024-07-25 11:29:27.143456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.488 [2024-07-25 11:29:27.146686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.488 [2024-07-25 11:29:27.146738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.488 pt2 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:11.488 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:11.745 malloc3 00:18:11.745 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.003 [2024-07-25 11:29:27.667403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.003 [2024-07-25 11:29:27.667484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.003 [2024-07-25 11:29:27.667517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:12.003 [2024-07-25 11:29:27.667536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.003 [2024-07-25 11:29:27.670290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.003 [2024-07-25 
11:29:27.670341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.003 pt3 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:12.003 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:12.004 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:12.004 11:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:12.267 malloc4 00:18:12.267 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.526 [2024-07-25 11:29:28.251998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.526 [2024-07-25 11:29:28.252092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.526 [2024-07-25 11:29:28.252122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:12.526 [2024-07-25 11:29:28.252140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.526 [2024-07-25 11:29:28.254954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.526 [2024-07-25 11:29:28.255029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:12.526 pt4 00:18:12.526 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:12.526 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:12.526 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:12.794 [2024-07-25 11:29:28.580240] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:12.794 [2024-07-25 11:29:28.582934] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.794 [2024-07-25 11:29:28.583051] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.794 [2024-07-25 11:29:28.583132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.794 [2024-07-25 11:29:28.583436] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:12.794 [2024-07-25 11:29:28.583461] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:12.794 [2024-07-25 11:29:28.583918] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.794 [2024-07-25 11:29:28.584171] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:12.794 [2024-07-25 11:29:28.584193] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:12.794 [2024-07-25 11:29:28.584508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.794 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.052 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.052 "name": "raid_bdev1", 00:18:13.052 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:13.052 "strip_size_kb": 64, 00:18:13.052 "state": "online", 00:18:13.052 "raid_level": "concat", 00:18:13.052 "superblock": true, 00:18:13.052 "num_base_bdevs": 4, 00:18:13.052 "num_base_bdevs_discovered": 4, 00:18:13.052 "num_base_bdevs_operational": 4, 00:18:13.052 "base_bdevs_list": [ 00:18:13.052 { 00:18:13.052 "name": "pt1", 00:18:13.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.052 "is_configured": true, 00:18:13.052 "data_offset": 2048, 00:18:13.052 "data_size": 63488 00:18:13.052 }, 00:18:13.052 { 00:18:13.052 "name": "pt2", 00:18:13.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.052 "is_configured": true, 00:18:13.052 "data_offset": 2048, 00:18:13.052 "data_size": 63488 00:18:13.052 }, 00:18:13.052 { 00:18:13.052 "name": "pt3", 00:18:13.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.052 "is_configured": true, 00:18:13.052 "data_offset": 2048, 00:18:13.052 "data_size": 63488 00:18:13.052 }, 00:18:13.052 { 00:18:13.052 "name": "pt4", 00:18:13.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:13.052 "is_configured": true, 00:18:13.052 "data_offset": 2048, 00:18:13.052 "data_size": 63488 00:18:13.052 } 00:18:13.052 ] 00:18:13.052 }' 00:18:13.052 11:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.052 11:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.987 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:13.987 [2024-07-25 11:29:29.853276] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.245 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:14.245 "name": "raid_bdev1", 00:18:14.245 "aliases": [ 00:18:14.245 "6a136ecf-87c4-4f7f-bdec-92a055b441be" 00:18:14.245 ], 00:18:14.245 "product_name": "Raid Volume", 00:18:14.245 "block_size": 512, 00:18:14.245 "num_blocks": 253952, 00:18:14.245 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:14.245 "assigned_rate_limits": { 00:18:14.245 "rw_ios_per_sec": 0, 00:18:14.245 "rw_mbytes_per_sec": 0, 00:18:14.245 "r_mbytes_per_sec": 0, 00:18:14.245 "w_mbytes_per_sec": 0 00:18:14.245 }, 00:18:14.245 "claimed": false, 00:18:14.245 "zoned": false, 00:18:14.245 "supported_io_types": { 00:18:14.245 "read": true, 00:18:14.245 "write": true, 00:18:14.245 "unmap": true, 00:18:14.245 "flush": true, 00:18:14.245 "reset": true, 00:18:14.245 "nvme_admin": false, 00:18:14.245 "nvme_io": false, 00:18:14.245 "nvme_io_md": false, 00:18:14.245 "write_zeroes": true, 00:18:14.245 "zcopy": false, 00:18:14.245 "get_zone_info": false, 00:18:14.245 "zone_management": false, 00:18:14.245 "zone_append": false, 00:18:14.245 "compare": false, 00:18:14.245 "compare_and_write": false, 00:18:14.245 "abort": false, 00:18:14.245 "seek_hole": false, 00:18:14.245 "seek_data": false, 00:18:14.245 "copy": false, 00:18:14.245 "nvme_iov_md": false 00:18:14.245 }, 00:18:14.245 "memory_domains": [ 00:18:14.245 { 00:18:14.245 "dma_device_id": "system", 00:18:14.245 "dma_device_type": 1 00:18:14.245 }, 00:18:14.245 { 00:18:14.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.245 "dma_device_type": 2 00:18:14.245 }, 00:18:14.245 { 00:18:14.245 "dma_device_id": "system", 00:18:14.245 "dma_device_type": 1 00:18:14.245 }, 00:18:14.245 { 00:18:14.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.245 "dma_device_type": 2 00:18:14.245 }, 00:18:14.245 { 00:18:14.245 "dma_device_id": "system", 00:18:14.245 "dma_device_type": 1 00:18:14.245 }, 00:18:14.245 { 00:18:14.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.246 "dma_device_type": 2 00:18:14.246 }, 00:18:14.246 { 00:18:14.246 "dma_device_id": "system", 00:18:14.246 "dma_device_type": 1 00:18:14.246 }, 00:18:14.246 { 00:18:14.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.246 "dma_device_type": 2 00:18:14.246 } 00:18:14.246 ], 00:18:14.246 "driver_specific": { 00:18:14.246 "raid": { 00:18:14.246 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:14.246 "strip_size_kb": 64, 00:18:14.246 "state": "online", 00:18:14.246 "raid_level": "concat", 00:18:14.246 "superblock": true, 00:18:14.246 "num_base_bdevs": 4, 00:18:14.246 "num_base_bdevs_discovered": 4, 00:18:14.246 "num_base_bdevs_operational": 4, 
00:18:14.246 "base_bdevs_list": [ 00:18:14.246 { 00:18:14.246 "name": "pt1", 00:18:14.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.246 "is_configured": true, 00:18:14.246 "data_offset": 2048, 00:18:14.246 "data_size": 63488 00:18:14.246 }, 00:18:14.246 { 00:18:14.246 "name": "pt2", 00:18:14.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.246 "is_configured": true, 00:18:14.246 "data_offset": 2048, 00:18:14.246 "data_size": 63488 00:18:14.246 }, 00:18:14.246 { 00:18:14.246 "name": "pt3", 00:18:14.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.246 "is_configured": true, 00:18:14.246 "data_offset": 2048, 00:18:14.246 "data_size": 63488 00:18:14.246 }, 00:18:14.246 { 00:18:14.246 "name": "pt4", 00:18:14.246 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:14.246 "is_configured": true, 00:18:14.246 "data_offset": 2048, 00:18:14.246 "data_size": 63488 00:18:14.246 } 00:18:14.246 ] 00:18:14.246 } 00:18:14.246 } 00:18:14.246 }' 00:18:14.246 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.246 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:14.246 pt2 00:18:14.246 pt3 00:18:14.246 pt4' 00:18:14.246 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.246 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:14.246 11:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.504 "name": "pt1", 00:18:14.504 "aliases": [ 00:18:14.504 "00000000-0000-0000-0000-000000000001" 00:18:14.504 ], 00:18:14.504 "product_name": "passthru", 00:18:14.504 "block_size": 512, 00:18:14.504 "num_blocks": 65536, 00:18:14.504 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.504 "assigned_rate_limits": { 00:18:14.504 "rw_ios_per_sec": 0, 00:18:14.504 "rw_mbytes_per_sec": 0, 00:18:14.504 "r_mbytes_per_sec": 0, 00:18:14.504 "w_mbytes_per_sec": 0 00:18:14.504 }, 00:18:14.504 "claimed": true, 00:18:14.504 "claim_type": "exclusive_write", 00:18:14.504 "zoned": false, 00:18:14.504 "supported_io_types": { 00:18:14.504 "read": true, 00:18:14.504 "write": true, 00:18:14.504 "unmap": true, 00:18:14.504 "flush": true, 00:18:14.504 "reset": true, 00:18:14.504 "nvme_admin": false, 00:18:14.504 "nvme_io": false, 00:18:14.504 "nvme_io_md": false, 00:18:14.504 "write_zeroes": true, 00:18:14.504 "zcopy": true, 00:18:14.504 "get_zone_info": false, 00:18:14.504 "zone_management": false, 00:18:14.504 "zone_append": false, 00:18:14.504 "compare": false, 00:18:14.504 "compare_and_write": false, 00:18:14.504 "abort": true, 00:18:14.504 "seek_hole": false, 00:18:14.504 "seek_data": false, 00:18:14.504 "copy": true, 00:18:14.504 "nvme_iov_md": false 00:18:14.504 }, 00:18:14.504 "memory_domains": [ 00:18:14.504 { 00:18:14.504 "dma_device_id": "system", 00:18:14.504 "dma_device_type": 1 00:18:14.504 }, 00:18:14.504 { 00:18:14.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.504 "dma_device_type": 2 00:18:14.504 } 00:18:14.504 ], 00:18:14.504 "driver_specific": { 00:18:14.504 "passthru": { 00:18:14.504 "name": "pt1", 00:18:14.504 "base_bdev_name": "malloc1" 00:18:14.504 } 00:18:14.504 } 00:18:14.504 }' 00:18:14.504 11:29:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.504 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.762 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.020 "name": "pt2", 00:18:15.020 "aliases": [ 00:18:15.020 "00000000-0000-0000-0000-000000000002" 00:18:15.020 ], 00:18:15.020 "product_name": "passthru", 00:18:15.020 "block_size": 512, 00:18:15.020 "num_blocks": 65536, 00:18:15.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.020 "assigned_rate_limits": { 00:18:15.020 "rw_ios_per_sec": 0, 00:18:15.020 "rw_mbytes_per_sec": 0, 00:18:15.020 "r_mbytes_per_sec": 0, 00:18:15.020 "w_mbytes_per_sec": 0 00:18:15.020 }, 00:18:15.020 "claimed": true, 00:18:15.020 "claim_type": "exclusive_write", 00:18:15.020 "zoned": false, 00:18:15.020 "supported_io_types": { 00:18:15.020 "read": true, 00:18:15.020 "write": true, 00:18:15.020 "unmap": true, 00:18:15.020 "flush": true, 00:18:15.020 "reset": true, 00:18:15.020 "nvme_admin": false, 00:18:15.020 "nvme_io": false, 00:18:15.020 "nvme_io_md": false, 00:18:15.020 "write_zeroes": true, 00:18:15.020 "zcopy": true, 00:18:15.020 "get_zone_info": false, 00:18:15.020 "zone_management": false, 00:18:15.020 "zone_append": false, 00:18:15.020 "compare": false, 00:18:15.020 "compare_and_write": false, 00:18:15.020 "abort": true, 00:18:15.020 "seek_hole": false, 00:18:15.020 "seek_data": false, 00:18:15.020 "copy": true, 00:18:15.020 "nvme_iov_md": false 00:18:15.020 }, 00:18:15.020 "memory_domains": [ 00:18:15.020 { 00:18:15.020 "dma_device_id": "system", 00:18:15.020 "dma_device_type": 1 00:18:15.020 }, 00:18:15.020 { 00:18:15.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.020 "dma_device_type": 2 00:18:15.020 } 00:18:15.020 ], 00:18:15.020 "driver_specific": { 00:18:15.020 "passthru": { 00:18:15.020 "name": "pt2", 00:18:15.020 "base_bdev_name": "malloc2" 00:18:15.020 } 00:18:15.020 } 00:18:15.020 }' 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.020 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.277 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.277 11:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:15.277 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:15.534 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:15.534 "name": "pt3", 00:18:15.534 "aliases": [ 00:18:15.534 "00000000-0000-0000-0000-000000000003" 00:18:15.534 ], 00:18:15.534 "product_name": "passthru", 00:18:15.534 "block_size": 512, 00:18:15.534 "num_blocks": 65536, 00:18:15.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:15.534 "assigned_rate_limits": { 00:18:15.534 "rw_ios_per_sec": 0, 00:18:15.534 "rw_mbytes_per_sec": 0, 00:18:15.534 "r_mbytes_per_sec": 0, 00:18:15.534 "w_mbytes_per_sec": 0 00:18:15.534 }, 00:18:15.534 "claimed": true, 00:18:15.534 "claim_type": "exclusive_write", 00:18:15.534 "zoned": false, 00:18:15.534 "supported_io_types": { 00:18:15.534 "read": true, 00:18:15.534 "write": true, 00:18:15.534 "unmap": true, 00:18:15.534 "flush": true, 00:18:15.534 "reset": true, 00:18:15.534 "nvme_admin": false, 00:18:15.534 "nvme_io": false, 00:18:15.534 "nvme_io_md": false, 00:18:15.534 "write_zeroes": true, 00:18:15.534 "zcopy": true, 00:18:15.534 "get_zone_info": false, 00:18:15.534 "zone_management": false, 00:18:15.534 "zone_append": false, 00:18:15.534 "compare": false, 00:18:15.534 "compare_and_write": false, 00:18:15.534 "abort": true, 00:18:15.534 "seek_hole": false, 00:18:15.534 "seek_data": false, 00:18:15.534 "copy": true, 00:18:15.534 "nvme_iov_md": false 00:18:15.534 }, 00:18:15.534 "memory_domains": [ 00:18:15.534 { 00:18:15.534 "dma_device_id": "system", 00:18:15.534 "dma_device_type": 1 00:18:15.534 }, 00:18:15.534 { 00:18:15.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.534 "dma_device_type": 2 00:18:15.534 } 00:18:15.534 ], 00:18:15.534 "driver_specific": { 00:18:15.534 "passthru": { 00:18:15.534 "name": "pt3", 00:18:15.534 "base_bdev_name": "malloc3" 00:18:15.534 } 00:18:15.534 } 00:18:15.534 }' 00:18:15.534 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:15.792 11:29:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:15.792 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:16.051 11:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:16.309 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:16.309 "name": "pt4", 00:18:16.309 "aliases": [ 00:18:16.309 "00000000-0000-0000-0000-000000000004" 00:18:16.309 ], 00:18:16.309 "product_name": "passthru", 00:18:16.309 "block_size": 512, 00:18:16.309 "num_blocks": 65536, 00:18:16.309 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:16.309 "assigned_rate_limits": { 00:18:16.309 "rw_ios_per_sec": 0, 00:18:16.309 "rw_mbytes_per_sec": 0, 00:18:16.309 "r_mbytes_per_sec": 0, 00:18:16.309 "w_mbytes_per_sec": 0 00:18:16.309 }, 00:18:16.309 "claimed": true, 00:18:16.309 "claim_type": "exclusive_write", 00:18:16.309 "zoned": false, 00:18:16.309 "supported_io_types": { 00:18:16.309 "read": true, 00:18:16.309 "write": true, 00:18:16.309 "unmap": true, 00:18:16.309 "flush": true, 00:18:16.309 "reset": true, 00:18:16.309 "nvme_admin": false, 00:18:16.309 "nvme_io": false, 00:18:16.309 "nvme_io_md": false, 00:18:16.309 "write_zeroes": true, 00:18:16.309 "zcopy": true, 00:18:16.309 "get_zone_info": false, 00:18:16.309 "zone_management": false, 00:18:16.309 "zone_append": false, 00:18:16.309 "compare": false, 00:18:16.309 "compare_and_write": false, 00:18:16.309 "abort": true, 00:18:16.309 "seek_hole": false, 00:18:16.309 "seek_data": false, 00:18:16.309 "copy": true, 00:18:16.309 "nvme_iov_md": false 00:18:16.309 }, 00:18:16.309 "memory_domains": [ 00:18:16.309 { 00:18:16.309 "dma_device_id": "system", 00:18:16.309 "dma_device_type": 1 00:18:16.309 }, 00:18:16.309 { 00:18:16.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.309 "dma_device_type": 2 00:18:16.309 } 00:18:16.309 ], 00:18:16.309 "driver_specific": { 00:18:16.309 "passthru": { 00:18:16.309 "name": "pt4", 00:18:16.309 "base_bdev_name": "malloc4" 00:18:16.309 } 00:18:16.309 } 00:18:16.309 }' 00:18:16.309 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.309 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:16.567 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.825 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:16.825 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:16.825 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:16.825 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:18:17.083 [2024-07-25 11:29:32.798020] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.083 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6a136ecf-87c4-4f7f-bdec-92a055b441be 00:18:17.083 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 6a136ecf-87c4-4f7f-bdec-92a055b441be ']' 00:18:17.083 11:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:17.340 [2024-07-25 11:29:33.085693] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.340 [2024-07-25 11:29:33.085743] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.340 [2024-07-25 11:29:33.085850] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.340 [2024-07-25 11:29:33.085950] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.340 [2024-07-25 11:29:33.085967] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:17.340 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.340 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:18:17.598 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:18:17.598 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:18:17.598 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.598 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:17.860 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:17.860 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:18.119 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.119 11:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:18.378 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:18.378 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:18.637 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:18.637 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:18.896 11:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:19.464 [2024-07-25 11:29:35.046244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:19.464 [2024-07-25 11:29:35.048666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:19.464 [2024-07-25 11:29:35.048747] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:19.464 [2024-07-25 11:29:35.048804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:19.464 [2024-07-25 11:29:35.048885] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:19.464 [2024-07-25 11:29:35.048974] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:19.464 [2024-07-25 11:29:35.049012] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:19.464 [2024-07-25 11:29:35.049042] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:19.464 [2024-07-25 11:29:35.049067] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.464 [2024-07-25 11:29:35.049081] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:19.464 request: 00:18:19.464 { 00:18:19.464 "name": "raid_bdev1", 00:18:19.464 "raid_level": "concat", 00:18:19.464 "base_bdevs": [ 00:18:19.464 "malloc1", 00:18:19.464 "malloc2", 00:18:19.464 "malloc3", 00:18:19.464 "malloc4" 00:18:19.464 ], 00:18:19.464 "strip_size_kb": 64, 00:18:19.464 "superblock": false, 00:18:19.464 "method": "bdev_raid_create", 00:18:19.464 "req_id": 1 00:18:19.464 } 00:18:19.464 Got JSON-RPC error response 00:18:19.464 response: 00:18:19.464 { 00:18:19.464 "code": -17, 00:18:19.464 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:19.464 } 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.464 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:18:19.722 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:18:19.722 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:18:19.722 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:19.980 [2024-07-25 11:29:35.610318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:19.980 [2024-07-25 11:29:35.610635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.980 [2024-07-25 11:29:35.610799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:19.980 [2024-07-25 11:29:35.610932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.980 [2024-07-25 11:29:35.613791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.980 [2024-07-25 11:29:35.613969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:19.980 [2024-07-25 11:29:35.614212] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:19.980 [2024-07-25 11:29:35.614296] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:19.980 pt1 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.980 11:29:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.980 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.238 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.238 "name": "raid_bdev1", 00:18:20.238 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:20.238 "strip_size_kb": 64, 00:18:20.238 "state": "configuring", 00:18:20.238 "raid_level": "concat", 00:18:20.238 "superblock": true, 00:18:20.238 "num_base_bdevs": 4, 00:18:20.238 "num_base_bdevs_discovered": 1, 00:18:20.238 "num_base_bdevs_operational": 4, 00:18:20.238 "base_bdevs_list": [ 00:18:20.238 { 00:18:20.238 "name": "pt1", 00:18:20.238 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:20.238 "is_configured": true, 00:18:20.238 "data_offset": 2048, 00:18:20.238 "data_size": 63488 00:18:20.238 }, 00:18:20.238 { 00:18:20.238 "name": null, 00:18:20.238 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:20.238 "is_configured": false, 00:18:20.238 "data_offset": 2048, 00:18:20.238 "data_size": 63488 00:18:20.238 }, 00:18:20.238 { 00:18:20.238 "name": null, 00:18:20.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:20.238 "is_configured": false, 00:18:20.238 "data_offset": 2048, 00:18:20.238 "data_size": 63488 00:18:20.238 }, 00:18:20.238 { 00:18:20.238 "name": null, 00:18:20.238 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:20.238 "is_configured": false, 00:18:20.238 "data_offset": 2048, 00:18:20.238 "data_size": 63488 00:18:20.238 } 00:18:20.238 ] 00:18:20.238 }' 00:18:20.238 11:29:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.238 11:29:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.805 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:18:20.805 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.063 [2024-07-25 11:29:36.830586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.063 [2024-07-25 11:29:36.830698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.063 [2024-07-25 11:29:36.830738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:21.063 [2024-07-25 11:29:36.830754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:18:21.063 [2024-07-25 11:29:36.831350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.063 [2024-07-25 11:29:36.831389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.063 [2024-07-25 11:29:36.831498] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:21.063 [2024-07-25 11:29:36.831532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.063 pt2 00:18:21.063 11:29:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:21.322 [2024-07-25 11:29:37.074738] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.322 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.580 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.580 "name": "raid_bdev1", 00:18:21.580 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:21.580 "strip_size_kb": 64, 00:18:21.580 "state": "configuring", 00:18:21.580 "raid_level": "concat", 00:18:21.580 "superblock": true, 00:18:21.580 "num_base_bdevs": 4, 00:18:21.580 "num_base_bdevs_discovered": 1, 00:18:21.580 "num_base_bdevs_operational": 4, 00:18:21.580 "base_bdevs_list": [ 00:18:21.580 { 00:18:21.580 "name": "pt1", 00:18:21.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:21.580 "is_configured": true, 00:18:21.580 "data_offset": 2048, 00:18:21.580 "data_size": 63488 00:18:21.580 }, 00:18:21.580 { 00:18:21.580 "name": null, 00:18:21.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.580 "is_configured": false, 00:18:21.580 "data_offset": 2048, 00:18:21.580 "data_size": 63488 00:18:21.580 }, 00:18:21.580 { 00:18:21.580 "name": null, 00:18:21.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:21.580 "is_configured": false, 00:18:21.580 "data_offset": 2048, 00:18:21.580 "data_size": 63488 00:18:21.580 }, 00:18:21.580 { 00:18:21.580 "name": null, 00:18:21.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:21.580 "is_configured": false, 00:18:21.580 "data_offset": 2048, 
00:18:21.580 "data_size": 63488 00:18:21.580 } 00:18:21.580 ] 00:18:21.580 }' 00:18:21.580 11:29:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.580 11:29:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.515 [2024-07-25 11:29:38.306992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.515 [2024-07-25 11:29:38.307101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.515 [2024-07-25 11:29:38.307135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:22.515 [2024-07-25 11:29:38.307153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.515 [2024-07-25 11:29:38.307740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.515 [2024-07-25 11:29:38.307785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.515 [2024-07-25 11:29:38.307886] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:22.515 [2024-07-25 11:29:38.307928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.515 pt2 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:22.515 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:22.774 [2024-07-25 11:29:38.594184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:22.774 [2024-07-25 11:29:38.594303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.774 [2024-07-25 11:29:38.594350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:22.774 [2024-07-25 11:29:38.594376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.774 [2024-07-25 11:29:38.595068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.774 [2024-07-25 11:29:38.595115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:22.774 [2024-07-25 11:29:38.595249] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:22.774 [2024-07-25 11:29:38.595294] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:22.774 pt3 00:18:22.774 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:22.774 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:22.774 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 
00:18:23.032 [2024-07-25 11:29:38.822173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:23.032 [2024-07-25 11:29:38.822268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.032 [2024-07-25 11:29:38.822298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:23.032 [2024-07-25 11:29:38.822316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.032 [2024-07-25 11:29:38.822901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.032 [2024-07-25 11:29:38.822947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:23.032 [2024-07-25 11:29:38.823050] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:23.032 [2024-07-25 11:29:38.823107] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:23.032 [2024-07-25 11:29:38.823296] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:23.032 [2024-07-25 11:29:38.823324] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:23.032 [2024-07-25 11:29:38.823650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:23.032 [2024-07-25 11:29:38.823867] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:23.032 [2024-07-25 11:29:38.823884] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:23.032 [2024-07-25 11:29:38.824042] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.032 pt4 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.032 11:29:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.344 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.344 "name": "raid_bdev1", 00:18:23.344 "uuid": 
"6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:23.344 "strip_size_kb": 64, 00:18:23.344 "state": "online", 00:18:23.344 "raid_level": "concat", 00:18:23.344 "superblock": true, 00:18:23.344 "num_base_bdevs": 4, 00:18:23.344 "num_base_bdevs_discovered": 4, 00:18:23.344 "num_base_bdevs_operational": 4, 00:18:23.344 "base_bdevs_list": [ 00:18:23.344 { 00:18:23.344 "name": "pt1", 00:18:23.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.344 "is_configured": true, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 }, 00:18:23.344 { 00:18:23.344 "name": "pt2", 00:18:23.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.344 "is_configured": true, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 }, 00:18:23.344 { 00:18:23.344 "name": "pt3", 00:18:23.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:23.344 "is_configured": true, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 }, 00:18:23.344 { 00:18:23.344 "name": "pt4", 00:18:23.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:23.344 "is_configured": true, 00:18:23.344 "data_offset": 2048, 00:18:23.344 "data_size": 63488 00:18:23.344 } 00:18:23.344 ] 00:18:23.344 }' 00:18:23.344 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.344 11:29:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:23.954 11:29:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:24.213 [2024-07-25 11:29:40.042947] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.213 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:24.213 "name": "raid_bdev1", 00:18:24.213 "aliases": [ 00:18:24.213 "6a136ecf-87c4-4f7f-bdec-92a055b441be" 00:18:24.213 ], 00:18:24.213 "product_name": "Raid Volume", 00:18:24.213 "block_size": 512, 00:18:24.213 "num_blocks": 253952, 00:18:24.213 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:24.213 "assigned_rate_limits": { 00:18:24.213 "rw_ios_per_sec": 0, 00:18:24.213 "rw_mbytes_per_sec": 0, 00:18:24.213 "r_mbytes_per_sec": 0, 00:18:24.213 "w_mbytes_per_sec": 0 00:18:24.213 }, 00:18:24.213 "claimed": false, 00:18:24.213 "zoned": false, 00:18:24.213 "supported_io_types": { 00:18:24.213 "read": true, 00:18:24.213 "write": true, 00:18:24.213 "unmap": true, 00:18:24.213 "flush": true, 00:18:24.213 "reset": true, 00:18:24.213 "nvme_admin": false, 00:18:24.213 "nvme_io": false, 00:18:24.213 "nvme_io_md": false, 00:18:24.213 "write_zeroes": true, 00:18:24.213 "zcopy": false, 00:18:24.213 "get_zone_info": false, 00:18:24.213 "zone_management": 
false, 00:18:24.213 "zone_append": false, 00:18:24.213 "compare": false, 00:18:24.213 "compare_and_write": false, 00:18:24.213 "abort": false, 00:18:24.213 "seek_hole": false, 00:18:24.213 "seek_data": false, 00:18:24.213 "copy": false, 00:18:24.213 "nvme_iov_md": false 00:18:24.213 }, 00:18:24.213 "memory_domains": [ 00:18:24.213 { 00:18:24.213 "dma_device_id": "system", 00:18:24.213 "dma_device_type": 1 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.213 "dma_device_type": 2 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "system", 00:18:24.213 "dma_device_type": 1 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.213 "dma_device_type": 2 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "system", 00:18:24.213 "dma_device_type": 1 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.213 "dma_device_type": 2 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "system", 00:18:24.213 "dma_device_type": 1 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.213 "dma_device_type": 2 00:18:24.213 } 00:18:24.213 ], 00:18:24.213 "driver_specific": { 00:18:24.213 "raid": { 00:18:24.213 "uuid": "6a136ecf-87c4-4f7f-bdec-92a055b441be", 00:18:24.213 "strip_size_kb": 64, 00:18:24.213 "state": "online", 00:18:24.213 "raid_level": "concat", 00:18:24.213 "superblock": true, 00:18:24.213 "num_base_bdevs": 4, 00:18:24.213 "num_base_bdevs_discovered": 4, 00:18:24.213 "num_base_bdevs_operational": 4, 00:18:24.213 "base_bdevs_list": [ 00:18:24.213 { 00:18:24.213 "name": "pt1", 00:18:24.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.213 "is_configured": true, 00:18:24.213 "data_offset": 2048, 00:18:24.213 "data_size": 63488 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "name": "pt2", 00:18:24.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.213 "is_configured": true, 00:18:24.213 "data_offset": 2048, 00:18:24.213 "data_size": 63488 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "name": "pt3", 00:18:24.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:24.213 "is_configured": true, 00:18:24.213 "data_offset": 2048, 00:18:24.213 "data_size": 63488 00:18:24.213 }, 00:18:24.213 { 00:18:24.213 "name": "pt4", 00:18:24.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:24.213 "is_configured": true, 00:18:24.213 "data_offset": 2048, 00:18:24.213 "data_size": 63488 00:18:24.213 } 00:18:24.213 ] 00:18:24.213 } 00:18:24.213 } 00:18:24.213 }' 00:18:24.213 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.471 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:24.471 pt2 00:18:24.471 pt3 00:18:24.471 pt4' 00:18:24.471 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.471 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:24.471 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:24.730 "name": "pt1", 00:18:24.730 "aliases": [ 00:18:24.730 "00000000-0000-0000-0000-000000000001" 00:18:24.730 ], 00:18:24.730 "product_name": 
"passthru", 00:18:24.730 "block_size": 512, 00:18:24.730 "num_blocks": 65536, 00:18:24.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.730 "assigned_rate_limits": { 00:18:24.730 "rw_ios_per_sec": 0, 00:18:24.730 "rw_mbytes_per_sec": 0, 00:18:24.730 "r_mbytes_per_sec": 0, 00:18:24.730 "w_mbytes_per_sec": 0 00:18:24.730 }, 00:18:24.730 "claimed": true, 00:18:24.730 "claim_type": "exclusive_write", 00:18:24.730 "zoned": false, 00:18:24.730 "supported_io_types": { 00:18:24.730 "read": true, 00:18:24.730 "write": true, 00:18:24.730 "unmap": true, 00:18:24.730 "flush": true, 00:18:24.730 "reset": true, 00:18:24.730 "nvme_admin": false, 00:18:24.730 "nvme_io": false, 00:18:24.730 "nvme_io_md": false, 00:18:24.730 "write_zeroes": true, 00:18:24.730 "zcopy": true, 00:18:24.730 "get_zone_info": false, 00:18:24.730 "zone_management": false, 00:18:24.730 "zone_append": false, 00:18:24.730 "compare": false, 00:18:24.730 "compare_and_write": false, 00:18:24.730 "abort": true, 00:18:24.730 "seek_hole": false, 00:18:24.730 "seek_data": false, 00:18:24.730 "copy": true, 00:18:24.730 "nvme_iov_md": false 00:18:24.730 }, 00:18:24.730 "memory_domains": [ 00:18:24.730 { 00:18:24.730 "dma_device_id": "system", 00:18:24.730 "dma_device_type": 1 00:18:24.730 }, 00:18:24.730 { 00:18:24.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.730 "dma_device_type": 2 00:18:24.730 } 00:18:24.730 ], 00:18:24.730 "driver_specific": { 00:18:24.730 "passthru": { 00:18:24.730 "name": "pt1", 00:18:24.730 "base_bdev_name": "malloc1" 00:18:24.730 } 00:18:24.730 } 00:18:24.730 }' 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:24.730 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:24.988 11:29:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.246 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.246 "name": "pt2", 00:18:25.246 "aliases": [ 00:18:25.246 "00000000-0000-0000-0000-000000000002" 00:18:25.246 ], 00:18:25.246 "product_name": "passthru", 00:18:25.246 "block_size": 512, 00:18:25.246 "num_blocks": 65536, 00:18:25.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.246 
"assigned_rate_limits": { 00:18:25.246 "rw_ios_per_sec": 0, 00:18:25.246 "rw_mbytes_per_sec": 0, 00:18:25.246 "r_mbytes_per_sec": 0, 00:18:25.246 "w_mbytes_per_sec": 0 00:18:25.246 }, 00:18:25.246 "claimed": true, 00:18:25.246 "claim_type": "exclusive_write", 00:18:25.246 "zoned": false, 00:18:25.246 "supported_io_types": { 00:18:25.246 "read": true, 00:18:25.246 "write": true, 00:18:25.246 "unmap": true, 00:18:25.246 "flush": true, 00:18:25.246 "reset": true, 00:18:25.246 "nvme_admin": false, 00:18:25.246 "nvme_io": false, 00:18:25.246 "nvme_io_md": false, 00:18:25.246 "write_zeroes": true, 00:18:25.246 "zcopy": true, 00:18:25.246 "get_zone_info": false, 00:18:25.246 "zone_management": false, 00:18:25.246 "zone_append": false, 00:18:25.246 "compare": false, 00:18:25.246 "compare_and_write": false, 00:18:25.246 "abort": true, 00:18:25.246 "seek_hole": false, 00:18:25.246 "seek_data": false, 00:18:25.246 "copy": true, 00:18:25.246 "nvme_iov_md": false 00:18:25.246 }, 00:18:25.246 "memory_domains": [ 00:18:25.246 { 00:18:25.246 "dma_device_id": "system", 00:18:25.246 "dma_device_type": 1 00:18:25.246 }, 00:18:25.246 { 00:18:25.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.246 "dma_device_type": 2 00:18:25.246 } 00:18:25.246 ], 00:18:25.246 "driver_specific": { 00:18:25.246 "passthru": { 00:18:25.246 "name": "pt2", 00:18:25.246 "base_bdev_name": "malloc2" 00:18:25.246 } 00:18:25.246 } 00:18:25.246 }' 00:18:25.246 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.246 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.504 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.763 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.763 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:25.763 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:25.763 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:25.763 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.022 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:26.022 "name": "pt3", 00:18:26.022 "aliases": [ 00:18:26.022 "00000000-0000-0000-0000-000000000003" 00:18:26.022 ], 00:18:26.022 "product_name": "passthru", 00:18:26.022 "block_size": 512, 00:18:26.022 "num_blocks": 65536, 00:18:26.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:26.022 "assigned_rate_limits": { 00:18:26.022 "rw_ios_per_sec": 0, 00:18:26.022 "rw_mbytes_per_sec": 0, 00:18:26.022 "r_mbytes_per_sec": 0, 00:18:26.022 
"w_mbytes_per_sec": 0 00:18:26.022 }, 00:18:26.022 "claimed": true, 00:18:26.022 "claim_type": "exclusive_write", 00:18:26.022 "zoned": false, 00:18:26.022 "supported_io_types": { 00:18:26.022 "read": true, 00:18:26.022 "write": true, 00:18:26.022 "unmap": true, 00:18:26.022 "flush": true, 00:18:26.022 "reset": true, 00:18:26.022 "nvme_admin": false, 00:18:26.022 "nvme_io": false, 00:18:26.022 "nvme_io_md": false, 00:18:26.022 "write_zeroes": true, 00:18:26.022 "zcopy": true, 00:18:26.022 "get_zone_info": false, 00:18:26.022 "zone_management": false, 00:18:26.022 "zone_append": false, 00:18:26.022 "compare": false, 00:18:26.022 "compare_and_write": false, 00:18:26.022 "abort": true, 00:18:26.022 "seek_hole": false, 00:18:26.022 "seek_data": false, 00:18:26.022 "copy": true, 00:18:26.022 "nvme_iov_md": false 00:18:26.022 }, 00:18:26.022 "memory_domains": [ 00:18:26.022 { 00:18:26.022 "dma_device_id": "system", 00:18:26.022 "dma_device_type": 1 00:18:26.022 }, 00:18:26.022 { 00:18:26.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.022 "dma_device_type": 2 00:18:26.022 } 00:18:26.022 ], 00:18:26.022 "driver_specific": { 00:18:26.022 "passthru": { 00:18:26.022 "name": "pt3", 00:18:26.022 "base_bdev_name": "malloc3" 00:18:26.022 } 00:18:26.022 } 00:18:26.022 }' 00:18:26.022 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.022 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.022 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:26.022 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.298 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.298 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:26.298 11:29:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.298 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.298 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:26.298 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:26.298 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:26.578 "name": "pt4", 00:18:26.578 "aliases": [ 00:18:26.578 "00000000-0000-0000-0000-000000000004" 00:18:26.578 ], 00:18:26.578 "product_name": "passthru", 00:18:26.578 "block_size": 512, 00:18:26.578 "num_blocks": 65536, 00:18:26.578 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:26.578 "assigned_rate_limits": { 00:18:26.578 "rw_ios_per_sec": 0, 00:18:26.578 "rw_mbytes_per_sec": 0, 00:18:26.578 "r_mbytes_per_sec": 0, 00:18:26.578 "w_mbytes_per_sec": 0 00:18:26.578 }, 00:18:26.578 "claimed": true, 00:18:26.578 "claim_type": "exclusive_write", 00:18:26.578 "zoned": false, 
00:18:26.578 "supported_io_types": { 00:18:26.578 "read": true, 00:18:26.578 "write": true, 00:18:26.578 "unmap": true, 00:18:26.578 "flush": true, 00:18:26.578 "reset": true, 00:18:26.578 "nvme_admin": false, 00:18:26.578 "nvme_io": false, 00:18:26.578 "nvme_io_md": false, 00:18:26.578 "write_zeroes": true, 00:18:26.578 "zcopy": true, 00:18:26.578 "get_zone_info": false, 00:18:26.578 "zone_management": false, 00:18:26.578 "zone_append": false, 00:18:26.578 "compare": false, 00:18:26.578 "compare_and_write": false, 00:18:26.578 "abort": true, 00:18:26.578 "seek_hole": false, 00:18:26.578 "seek_data": false, 00:18:26.578 "copy": true, 00:18:26.578 "nvme_iov_md": false 00:18:26.578 }, 00:18:26.578 "memory_domains": [ 00:18:26.578 { 00:18:26.578 "dma_device_id": "system", 00:18:26.578 "dma_device_type": 1 00:18:26.578 }, 00:18:26.578 { 00:18:26.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.578 "dma_device_type": 2 00:18:26.578 } 00:18:26.578 ], 00:18:26.578 "driver_specific": { 00:18:26.578 "passthru": { 00:18:26.578 "name": "pt4", 00:18:26.578 "base_bdev_name": "malloc4" 00:18:26.578 } 00:18:26.578 } 00:18:26.578 }' 00:18:26.578 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:26.836 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:27.094 11:29:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:18:27.352 [2024-07-25 11:29:43.063767] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 6a136ecf-87c4-4f7f-bdec-92a055b441be '!=' 6a136ecf-87c4-4f7f-bdec-92a055b441be ']' 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 81350 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81350 ']' 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81350 00:18:27.352 11:29:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.352 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81350 00:18:27.352 killing process with pid 81350 00:18:27.353 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.353 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.353 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81350' 00:18:27.353 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81350 00:18:27.353 [2024-07-25 11:29:43.120257] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.353 11:29:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81350 00:18:27.353 [2024-07-25 11:29:43.120377] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.353 [2024-07-25 11:29:43.120473] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.353 [2024-07-25 11:29:43.120493] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:27.611 [2024-07-25 11:29:43.467709] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:28.987 11:29:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:28.987 00:18:28.987 real 0m19.602s 00:18:28.987 user 0m34.972s 00:18:28.987 sys 0m2.460s 00:18:28.987 ************************************ 00:18:28.987 END TEST raid_superblock_test 00:18:28.987 ************************************ 00:18:28.987 11:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.987 11:29:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.988 11:29:44 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:18:28.988 11:29:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:28.988 11:29:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.988 11:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.988 ************************************ 00:18:28.988 START TEST raid_read_error_test 00:18:28.988 ************************************ 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
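Recapping the per-bdev property verification that ran repeatedly in raid_superblock_test above (bdev_raid.sh@203-@208), before raid_read_error_test gets going: each passthru bdev's JSON is fetched once, then probed with jq for block size and metadata/DIF settings. A minimal sketch of that pattern, assuming the same rpc.py socket used in this run (the rpc and info variable names are illustrative):

# Sketch of the property checks traced above: fetch each base bdev once,
# then assert on the fields that jq extracts from its JSON description.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for name in pt1 pt2 pt3 pt4; do
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<<"$info") == 512 ]]
    [[ $(jq .md_size <<<"$info") == null ]]
    [[ $(jq .md_interleave <<<"$info") == null ]]
    [[ $(jq .dif_type <<<"$info") == null ]]
done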
00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.CxS58ZAcuB 00:18:28.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=81895 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 81895 /var/tmp/spdk-raid.sock 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81895 ']' 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
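The raid_read_error_test prologue traced above reduces to launching bdevperf on a private RPC socket and blocking until that socket is listening; the workload itself is only kicked off later via the bdevperf.py perform_tests call seen further down in this log. A sketch of that launch, copying the flags from this run (waitforlisten is the autotest_common.sh helper traced above and is assumed to be sourced):

# Sketch: start bdevperf with the flags used in this run; -z defers the
# workload until the perform_tests RPC is issued later in the test.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 128k -q 1 -z -f -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock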
00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.988 11:29:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.988 [2024-07-25 11:29:44.863411] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:18:28.988 [2024-07-25 11:29:44.863585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81895 ] 00:18:29.247 [2024-07-25 11:29:45.032877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.506 [2024-07-25 11:29:45.316615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.763 [2024-07-25 11:29:45.535343] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.763 [2024-07-25 11:29:45.535426] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.033 11:29:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.033 11:29:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:30.033 11:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:30.033 11:29:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.291 BaseBdev1_malloc 00:18:30.291 11:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:30.549 true 00:18:30.549 11:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:30.808 [2024-07-25 11:29:46.632021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:30.808 [2024-07-25 11:29:46.632110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.808 [2024-07-25 11:29:46.632178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:30.808 [2024-07-25 11:29:46.632196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.808 [2024-07-25 11:29:46.635473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.808 [2024-07-25 11:29:46.635521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.808 BaseBdev1 00:18:30.808 11:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:30.808 11:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:31.375 BaseBdev2_malloc 00:18:31.375 11:29:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:31.632 true 00:18:31.632 11:29:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc 
-p BaseBdev2 00:18:31.632 [2024-07-25 11:29:47.502806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:31.632 [2024-07-25 11:29:47.503180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.632 [2024-07-25 11:29:47.503274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:31.632 [2024-07-25 11:29:47.503576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.632 [2024-07-25 11:29:47.506724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.632 [2024-07-25 11:29:47.506897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:31.632 BaseBdev2 00:18:31.891 11:29:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:31.891 11:29:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:32.149 BaseBdev3_malloc 00:18:32.149 11:29:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:32.407 true 00:18:32.407 11:29:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:32.665 [2024-07-25 11:29:48.425665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:32.665 [2024-07-25 11:29:48.425827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.665 [2024-07-25 11:29:48.425877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:32.665 [2024-07-25 11:29:48.425895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.665 [2024-07-25 11:29:48.429211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.665 [2024-07-25 11:29:48.429256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:32.665 BaseBdev3 00:18:32.665 11:29:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:32.665 11:29:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:32.923 BaseBdev4_malloc 00:18:32.923 11:29:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:33.180 true 00:18:33.180 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:33.438 [2024-07-25 11:29:49.270467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:33.438 [2024-07-25 11:29:49.270604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.438 [2024-07-25 11:29:49.270672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:33.438 [2024-07-25 11:29:49.270697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:33.438 [2024-07-25 11:29:49.273995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.438 [2024-07-25 11:29:49.274041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:33.438 BaseBdev4 00:18:33.438 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:33.696 [2024-07-25 11:29:49.510704] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.696 [2024-07-25 11:29:49.513651] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.696 [2024-07-25 11:29:49.513789] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.696 [2024-07-25 11:29:49.513893] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:33.696 [2024-07-25 11:29:49.514253] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:33.696 [2024-07-25 11:29:49.514272] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:33.696 [2024-07-25 11:29:49.514747] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:33.696 [2024-07-25 11:29:49.515045] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:33.696 [2024-07-25 11:29:49.515084] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:33.696 [2024-07-25 11:29:49.515431] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.696 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.953 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.953 "name": "raid_bdev1", 00:18:33.953 "uuid": "5e92d50e-5eb8-4b87-bdda-cb48b6d90f53", 00:18:33.953 "strip_size_kb": 64, 00:18:33.953 "state": "online", 00:18:33.953 "raid_level": "concat", 00:18:33.953 "superblock": true, 
00:18:33.953 "num_base_bdevs": 4, 00:18:33.953 "num_base_bdevs_discovered": 4, 00:18:33.953 "num_base_bdevs_operational": 4, 00:18:33.953 "base_bdevs_list": [ 00:18:33.953 { 00:18:33.953 "name": "BaseBdev1", 00:18:33.953 "uuid": "e5d6e2e8-af75-5a1b-b9e2-3bfeb5d28c1a", 00:18:33.953 "is_configured": true, 00:18:33.953 "data_offset": 2048, 00:18:33.953 "data_size": 63488 00:18:33.953 }, 00:18:33.953 { 00:18:33.953 "name": "BaseBdev2", 00:18:33.953 "uuid": "5c359ff0-1621-5154-be38-a7bc8ede672a", 00:18:33.953 "is_configured": true, 00:18:33.953 "data_offset": 2048, 00:18:33.953 "data_size": 63488 00:18:33.953 }, 00:18:33.953 { 00:18:33.953 "name": "BaseBdev3", 00:18:33.953 "uuid": "101dc6f9-3c7b-5662-8011-43c60390d0f9", 00:18:33.953 "is_configured": true, 00:18:33.953 "data_offset": 2048, 00:18:33.953 "data_size": 63488 00:18:33.953 }, 00:18:33.953 { 00:18:33.953 "name": "BaseBdev4", 00:18:33.953 "uuid": "9d6a327b-4cb1-505d-bba0-7b6db98507e4", 00:18:33.953 "is_configured": true, 00:18:33.953 "data_offset": 2048, 00:18:33.953 "data_size": 63488 00:18:33.953 } 00:18:33.953 ] 00:18:33.953 }' 00:18:33.953 11:29:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.953 11:29:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.889 11:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:34.889 11:29:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:34.889 [2024-07-25 11:29:50.609346] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:35.823 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.082 11:29:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.082 11:29:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.340 11:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.340 "name": "raid_bdev1", 00:18:36.340 "uuid": "5e92d50e-5eb8-4b87-bdda-cb48b6d90f53", 00:18:36.340 "strip_size_kb": 64, 00:18:36.340 "state": "online", 00:18:36.340 "raid_level": "concat", 00:18:36.340 "superblock": true, 00:18:36.340 "num_base_bdevs": 4, 00:18:36.340 "num_base_bdevs_discovered": 4, 00:18:36.340 "num_base_bdevs_operational": 4, 00:18:36.340 "base_bdevs_list": [ 00:18:36.340 { 00:18:36.340 "name": "BaseBdev1", 00:18:36.340 "uuid": "e5d6e2e8-af75-5a1b-b9e2-3bfeb5d28c1a", 00:18:36.340 "is_configured": true, 00:18:36.340 "data_offset": 2048, 00:18:36.340 "data_size": 63488 00:18:36.340 }, 00:18:36.340 { 00:18:36.340 "name": "BaseBdev2", 00:18:36.340 "uuid": "5c359ff0-1621-5154-be38-a7bc8ede672a", 00:18:36.340 "is_configured": true, 00:18:36.340 "data_offset": 2048, 00:18:36.340 "data_size": 63488 00:18:36.340 }, 00:18:36.340 { 00:18:36.340 "name": "BaseBdev3", 00:18:36.340 "uuid": "101dc6f9-3c7b-5662-8011-43c60390d0f9", 00:18:36.340 "is_configured": true, 00:18:36.340 "data_offset": 2048, 00:18:36.340 "data_size": 63488 00:18:36.340 }, 00:18:36.340 { 00:18:36.340 "name": "BaseBdev4", 00:18:36.340 "uuid": "9d6a327b-4cb1-505d-bba0-7b6db98507e4", 00:18:36.340 "is_configured": true, 00:18:36.340 "data_offset": 2048, 00:18:36.340 "data_size": 63488 00:18:36.340 } 00:18:36.340 ] 00:18:36.340 }' 00:18:36.340 11:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.340 11:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.277 11:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:37.277 [2024-07-25 11:29:53.097698] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.277 [2024-07-25 11:29:53.098054] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.277 [2024-07-25 11:29:53.101454] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.277 [2024-07-25 11:29:53.101676] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.277 [2024-07-25 11:29:53.101806] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.277 0 00:18:37.277 [2024-07-25 11:29:53.102078] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 81895 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81895 ']' 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81895 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81895 00:18:37.277 killing process with pid 81895 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.277 11:29:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81895' 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81895 00:18:37.277 [2024-07-25 11:29:53.146656] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.277 11:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81895 00:18:37.844 [2024-07-25 11:29:53.468713] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.CxS58ZAcuB 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.40 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.40 != \0\.\0\0 ]] 00:18:39.219 00:18:39.219 real 0m10.154s 00:18:39.219 user 0m15.591s 00:18:39.219 sys 0m1.235s 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.219 ************************************ 00:18:39.219 END TEST raid_read_error_test 00:18:39.219 ************************************ 00:18:39.219 11:29:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.219 11:29:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:18:39.219 11:29:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:39.219 11:29:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.219 11:29:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.219 ************************************ 00:18:39.219 START TEST raid_write_error_test 00:18:39.219 ************************************ 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 
-- # (( i++ )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:39.219 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.BsFwBRXm7X 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=82110 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 82110 /var/tmp/spdk-raid.sock 00:18:39.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82110 ']' 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.220 11:29:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.220 [2024-07-25 11:29:55.074228] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:18:39.220 [2024-07-25 11:29:55.074403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82110 ] 00:18:39.478 [2024-07-25 11:29:55.240755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.736 [2024-07-25 11:29:55.518708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.994 [2024-07-25 11:29:55.751436] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.994 [2024-07-25 11:29:55.751554] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.298 11:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.298 11:29:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:40.298 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:40.298 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:40.557 BaseBdev1_malloc 00:18:40.557 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:40.814 true 00:18:40.814 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:41.090 [2024-07-25 11:29:56.838196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:41.090 [2024-07-25 11:29:56.838331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.090 [2024-07-25 11:29:56.838375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:41.090 [2024-07-25 11:29:56.838393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.090 [2024-07-25 11:29:56.841581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.090 [2024-07-25 11:29:56.841647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:41.090 BaseBdev1 00:18:41.090 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:41.090 11:29:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:41.367 BaseBdev2_malloc 00:18:41.367 11:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:41.625 true 00:18:41.625 11:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:41.882 [2024-07-25 11:29:57.665772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:41.882 [2024-07-25 11:29:57.666172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.882 [2024-07-25 11:29:57.666263] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:41.882 [2024-07-25 11:29:57.666536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.882 [2024-07-25 11:29:57.669739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.883 [2024-07-25 11:29:57.669924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:41.883 BaseBdev2 00:18:41.883 11:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:41.883 11:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:42.141 BaseBdev3_malloc 00:18:42.141 11:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:42.399 true 00:18:42.399 11:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:42.657 [2024-07-25 11:29:58.459297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:42.657 [2024-07-25 11:29:58.459393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.657 [2024-07-25 11:29:58.459450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:42.657 [2024-07-25 11:29:58.459477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.657 [2024-07-25 11:29:58.462986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.657 [2024-07-25 11:29:58.463042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:42.657 BaseBdev3 00:18:42.657 11:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:42.657 11:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:43.223 BaseBdev4_malloc 00:18:43.223 11:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:43.481 true 00:18:43.481 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:43.481 [2024-07-25 11:29:59.355912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:43.481 [2024-07-25 11:29:59.356011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.481 [2024-07-25 11:29:59.356050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:43.481 [2024-07-25 11:29:59.356066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.481 [2024-07-25 11:29:59.358899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.481 [2024-07-25 11:29:59.358941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:43.481 BaseBdev4 00:18:43.739 
11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:43.998 [2024-07-25 11:29:59.644070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.998 [2024-07-25 11:29:59.646478] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:43.998 [2024-07-25 11:29:59.646595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.998 [2024-07-25 11:29:59.646719] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.998 [2024-07-25 11:29:59.647029] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:18:43.998 [2024-07-25 11:29:59.647048] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:43.998 [2024-07-25 11:29:59.647440] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.998 [2024-07-25 11:29:59.647876] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:18:43.998 [2024-07-25 11:29:59.647938] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:18:43.998 [2024-07-25 11:29:59.648352] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.998 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.256 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.256 "name": "raid_bdev1", 00:18:44.256 "uuid": "de113b60-007f-4a27-8a90-a809da904da9", 00:18:44.256 "strip_size_kb": 64, 00:18:44.256 "state": "online", 00:18:44.256 "raid_level": "concat", 00:18:44.256 "superblock": true, 00:18:44.256 "num_base_bdevs": 4, 00:18:44.256 "num_base_bdevs_discovered": 4, 00:18:44.256 "num_base_bdevs_operational": 4, 00:18:44.256 "base_bdevs_list": [ 00:18:44.256 { 00:18:44.256 "name": "BaseBdev1", 00:18:44.256 "uuid": "2dddb70d-303b-59a0-b7b8-2a1b96189e3b", 00:18:44.256 
"is_configured": true, 00:18:44.256 "data_offset": 2048, 00:18:44.256 "data_size": 63488 00:18:44.256 }, 00:18:44.256 { 00:18:44.256 "name": "BaseBdev2", 00:18:44.256 "uuid": "063dab6e-35f0-5acb-a75b-8dca00d10a7d", 00:18:44.256 "is_configured": true, 00:18:44.256 "data_offset": 2048, 00:18:44.256 "data_size": 63488 00:18:44.256 }, 00:18:44.256 { 00:18:44.256 "name": "BaseBdev3", 00:18:44.256 "uuid": "26211b86-a464-5f9b-9693-eed611d42afe", 00:18:44.256 "is_configured": true, 00:18:44.256 "data_offset": 2048, 00:18:44.256 "data_size": 63488 00:18:44.256 }, 00:18:44.256 { 00:18:44.256 "name": "BaseBdev4", 00:18:44.256 "uuid": "a3321060-6c37-52bd-86d8-26c0aad6ca81", 00:18:44.256 "is_configured": true, 00:18:44.256 "data_offset": 2048, 00:18:44.256 "data_size": 63488 00:18:44.256 } 00:18:44.256 ] 00:18:44.256 }' 00:18:44.256 11:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.256 11:29:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.839 11:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:44.839 11:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:45.097 [2024-07-25 11:30:00.733961] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.030 11:30:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.288 11:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.288 "name": "raid_bdev1", 00:18:46.288 "uuid": 
"de113b60-007f-4a27-8a90-a809da904da9", 00:18:46.288 "strip_size_kb": 64, 00:18:46.288 "state": "online", 00:18:46.288 "raid_level": "concat", 00:18:46.288 "superblock": true, 00:18:46.288 "num_base_bdevs": 4, 00:18:46.288 "num_base_bdevs_discovered": 4, 00:18:46.288 "num_base_bdevs_operational": 4, 00:18:46.288 "base_bdevs_list": [ 00:18:46.288 { 00:18:46.288 "name": "BaseBdev1", 00:18:46.288 "uuid": "2dddb70d-303b-59a0-b7b8-2a1b96189e3b", 00:18:46.288 "is_configured": true, 00:18:46.288 "data_offset": 2048, 00:18:46.288 "data_size": 63488 00:18:46.288 }, 00:18:46.288 { 00:18:46.288 "name": "BaseBdev2", 00:18:46.288 "uuid": "063dab6e-35f0-5acb-a75b-8dca00d10a7d", 00:18:46.288 "is_configured": true, 00:18:46.288 "data_offset": 2048, 00:18:46.288 "data_size": 63488 00:18:46.288 }, 00:18:46.288 { 00:18:46.288 "name": "BaseBdev3", 00:18:46.288 "uuid": "26211b86-a464-5f9b-9693-eed611d42afe", 00:18:46.288 "is_configured": true, 00:18:46.288 "data_offset": 2048, 00:18:46.288 "data_size": 63488 00:18:46.288 }, 00:18:46.288 { 00:18:46.288 "name": "BaseBdev4", 00:18:46.288 "uuid": "a3321060-6c37-52bd-86d8-26c0aad6ca81", 00:18:46.288 "is_configured": true, 00:18:46.288 "data_offset": 2048, 00:18:46.288 "data_size": 63488 00:18:46.288 } 00:18:46.288 ] 00:18:46.288 }' 00:18:46.288 11:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.288 11:30:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.220 11:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:47.220 [2024-07-25 11:30:03.099123] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.220 [2024-07-25 11:30:03.099168] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.478 [2024-07-25 11:30:03.102294] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.478 [2024-07-25 11:30:03.102366] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.478 [2024-07-25 11:30:03.102433] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.478 [2024-07-25 11:30:03.102449] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:18:47.478 0 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 82110 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82110 ']' 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82110 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82110 00:18:47.478 killing process with pid 82110 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82110' 00:18:47.478 11:30:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82110 00:18:47.478 11:30:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82110 00:18:47.478 [2024-07-25 11:30:03.155055] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.735 [2024-07-25 11:30:03.441998] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.BsFwBRXm7X 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:18:49.110 ************************************ 00:18:49.110 END TEST raid_write_error_test 00:18:49.110 ************************************ 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:18:49.110 00:18:49.110 real 0m9.698s 00:18:49.110 user 0m14.990s 00:18:49.110 sys 0m1.225s 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.110 11:30:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.110 11:30:04 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:18:49.110 11:30:04 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:49.110 11:30:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:49.110 11:30:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.110 11:30:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.110 ************************************ 00:18:49.110 START TEST raid_state_function_test 00:18:49.110 ************************************ 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:49.110 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=82319 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 82319' 00:18:49.111 Process raid pid: 82319 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 82319 /var/tmp/spdk-raid.sock 00:18:49.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82319 ']' 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.111 11:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.111 [2024-07-25 11:30:04.831725] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:18:49.111 [2024-07-25 11:30:04.831894] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.369 [2024-07-25 11:30:05.005098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.369 [2024-07-25 11:30:05.246422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.633 [2024-07-25 11:30:05.451176] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.633 [2024-07-25 11:30:05.451218] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.905 11:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.905 11:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:49.905 11:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:50.164 [2024-07-25 11:30:06.036091] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.164 [2024-07-25 11:30:06.036170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.164 [2024-07-25 11:30:06.036190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.164 [2024-07-25 11:30:06.036203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.164 [2024-07-25 11:30:06.036217] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.164 [2024-07-25 11:30:06.036229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.164 [2024-07-25 11:30:06.036241] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.164 [2024-07-25 11:30:06.036252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:50.422 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.680 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.681 "name": "Existed_Raid", 00:18:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.681 "strip_size_kb": 0, 00:18:50.681 "state": "configuring", 00:18:50.681 "raid_level": "raid1", 00:18:50.681 "superblock": false, 00:18:50.681 "num_base_bdevs": 4, 00:18:50.681 "num_base_bdevs_discovered": 0, 00:18:50.681 "num_base_bdevs_operational": 4, 00:18:50.681 "base_bdevs_list": [ 00:18:50.681 { 00:18:50.681 "name": "BaseBdev1", 00:18:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.681 "is_configured": false, 00:18:50.681 "data_offset": 0, 00:18:50.681 "data_size": 0 00:18:50.681 }, 00:18:50.681 { 00:18:50.681 "name": "BaseBdev2", 00:18:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.681 "is_configured": false, 00:18:50.681 "data_offset": 0, 00:18:50.681 "data_size": 0 00:18:50.681 }, 00:18:50.681 { 00:18:50.681 "name": "BaseBdev3", 00:18:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.681 "is_configured": false, 00:18:50.681 "data_offset": 0, 00:18:50.681 "data_size": 0 00:18:50.681 }, 00:18:50.681 { 00:18:50.681 "name": "BaseBdev4", 00:18:50.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.681 "is_configured": false, 00:18:50.681 "data_offset": 0, 00:18:50.681 "data_size": 0 00:18:50.681 } 00:18:50.681 ] 00:18:50.681 }' 00:18:50.681 11:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.681 11:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 11:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:51.504 [2024-07-25 11:30:07.364287] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.504 [2024-07-25 11:30:07.364339] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:51.504 11:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:51.761 [2024-07-25 11:30:07.592382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.761 [2024-07-25 11:30:07.592453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.761 [2024-07-25 11:30:07.592473] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.761 [2024-07-25 11:30:07.592486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.761 [2024-07-25 11:30:07.592499] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:51.761 [2024-07-25 11:30:07.592510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:51.761 [2024-07-25 11:30:07.592522] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:51.761 [2024-07-25 11:30:07.592533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:51.761 
11:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:52.019 [2024-07-25 11:30:07.884753] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.019 BaseBdev1 00:18:52.277 11:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:52.278 11:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.278 11:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:52.536 [ 00:18:52.536 { 00:18:52.536 "name": "BaseBdev1", 00:18:52.536 "aliases": [ 00:18:52.536 "ca9e11f1-3a75-4fad-a00b-e75a367662f3" 00:18:52.536 ], 00:18:52.536 "product_name": "Malloc disk", 00:18:52.536 "block_size": 512, 00:18:52.536 "num_blocks": 65536, 00:18:52.536 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:52.536 "assigned_rate_limits": { 00:18:52.536 "rw_ios_per_sec": 0, 00:18:52.536 "rw_mbytes_per_sec": 0, 00:18:52.536 "r_mbytes_per_sec": 0, 00:18:52.536 "w_mbytes_per_sec": 0 00:18:52.536 }, 00:18:52.536 "claimed": true, 00:18:52.536 "claim_type": "exclusive_write", 00:18:52.536 "zoned": false, 00:18:52.536 "supported_io_types": { 00:18:52.536 "read": true, 00:18:52.536 "write": true, 00:18:52.536 "unmap": true, 00:18:52.536 "flush": true, 00:18:52.536 "reset": true, 00:18:52.536 "nvme_admin": false, 00:18:52.536 "nvme_io": false, 00:18:52.536 "nvme_io_md": false, 00:18:52.536 "write_zeroes": true, 00:18:52.536 "zcopy": true, 00:18:52.536 "get_zone_info": false, 00:18:52.536 "zone_management": false, 00:18:52.536 "zone_append": false, 00:18:52.536 "compare": false, 00:18:52.536 "compare_and_write": false, 00:18:52.536 "abort": true, 00:18:52.536 "seek_hole": false, 00:18:52.536 "seek_data": false, 00:18:52.536 "copy": true, 00:18:52.536 "nvme_iov_md": false 00:18:52.536 }, 00:18:52.536 "memory_domains": [ 00:18:52.536 { 00:18:52.536 "dma_device_id": "system", 00:18:52.536 "dma_device_type": 1 00:18:52.536 }, 00:18:52.536 { 00:18:52.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.536 "dma_device_type": 2 00:18:52.536 } 00:18:52.536 ], 00:18:52.536 "driver_specific": {} 00:18:52.536 } 00:18:52.536 ] 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.536 
11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.536 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.794 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.794 "name": "Existed_Raid", 00:18:52.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.794 "strip_size_kb": 0, 00:18:52.794 "state": "configuring", 00:18:52.794 "raid_level": "raid1", 00:18:52.794 "superblock": false, 00:18:52.794 "num_base_bdevs": 4, 00:18:52.794 "num_base_bdevs_discovered": 1, 00:18:52.794 "num_base_bdevs_operational": 4, 00:18:52.794 "base_bdevs_list": [ 00:18:52.794 { 00:18:52.794 "name": "BaseBdev1", 00:18:52.794 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:52.794 "is_configured": true, 00:18:52.794 "data_offset": 0, 00:18:52.794 "data_size": 65536 00:18:52.794 }, 00:18:52.794 { 00:18:52.794 "name": "BaseBdev2", 00:18:52.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.794 "is_configured": false, 00:18:52.794 "data_offset": 0, 00:18:52.794 "data_size": 0 00:18:52.794 }, 00:18:52.794 { 00:18:52.794 "name": "BaseBdev3", 00:18:52.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.794 "is_configured": false, 00:18:52.794 "data_offset": 0, 00:18:52.794 "data_size": 0 00:18:52.794 }, 00:18:52.794 { 00:18:52.794 "name": "BaseBdev4", 00:18:52.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.794 "is_configured": false, 00:18:52.794 "data_offset": 0, 00:18:52.794 "data_size": 0 00:18:52.794 } 00:18:52.794 ] 00:18:52.794 }' 00:18:52.794 11:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.794 11:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.747 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:53.747 [2024-07-25 11:30:09.505371] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.747 [2024-07-25 11:30:09.505497] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:53.747 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:54.005 [2024-07-25 11:30:09.777505] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.005 
[2024-07-25 11:30:09.780224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.005 [2024-07-25 11:30:09.780300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.005 [2024-07-25 11:30:09.780321] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.005 [2024-07-25 11:30:09.780336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.005 [2024-07-25 11:30:09.780353] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:54.005 [2024-07-25 11:30:09.780365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.005 11:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.263 11:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.263 "name": "Existed_Raid", 00:18:54.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.263 "strip_size_kb": 0, 00:18:54.263 "state": "configuring", 00:18:54.263 "raid_level": "raid1", 00:18:54.263 "superblock": false, 00:18:54.263 "num_base_bdevs": 4, 00:18:54.263 "num_base_bdevs_discovered": 1, 00:18:54.263 "num_base_bdevs_operational": 4, 00:18:54.263 "base_bdevs_list": [ 00:18:54.263 { 00:18:54.263 "name": "BaseBdev1", 00:18:54.263 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:54.263 "is_configured": true, 00:18:54.263 "data_offset": 0, 00:18:54.263 "data_size": 65536 00:18:54.263 }, 00:18:54.263 { 00:18:54.263 "name": "BaseBdev2", 00:18:54.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.263 "is_configured": false, 00:18:54.263 "data_offset": 0, 00:18:54.263 "data_size": 0 00:18:54.263 }, 00:18:54.263 { 00:18:54.263 "name": "BaseBdev3", 00:18:54.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.263 "is_configured": false, 
00:18:54.263 "data_offset": 0, 00:18:54.263 "data_size": 0 00:18:54.263 }, 00:18:54.263 { 00:18:54.263 "name": "BaseBdev4", 00:18:54.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.263 "is_configured": false, 00:18:54.263 "data_offset": 0, 00:18:54.263 "data_size": 0 00:18:54.263 } 00:18:54.263 ] 00:18:54.263 }' 00:18:54.263 11:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.263 11:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.194 11:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:55.194 [2024-07-25 11:30:11.059750] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:55.194 BaseBdev2 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.453 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:55.711 [ 00:18:55.711 { 00:18:55.711 "name": "BaseBdev2", 00:18:55.711 "aliases": [ 00:18:55.711 "30281078-f4ba-43b2-ae09-72044625b3ed" 00:18:55.711 ], 00:18:55.711 "product_name": "Malloc disk", 00:18:55.711 "block_size": 512, 00:18:55.711 "num_blocks": 65536, 00:18:55.711 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:18:55.711 "assigned_rate_limits": { 00:18:55.711 "rw_ios_per_sec": 0, 00:18:55.711 "rw_mbytes_per_sec": 0, 00:18:55.711 "r_mbytes_per_sec": 0, 00:18:55.711 "w_mbytes_per_sec": 0 00:18:55.711 }, 00:18:55.711 "claimed": true, 00:18:55.711 "claim_type": "exclusive_write", 00:18:55.711 "zoned": false, 00:18:55.711 "supported_io_types": { 00:18:55.711 "read": true, 00:18:55.711 "write": true, 00:18:55.711 "unmap": true, 00:18:55.711 "flush": true, 00:18:55.711 "reset": true, 00:18:55.711 "nvme_admin": false, 00:18:55.711 "nvme_io": false, 00:18:55.711 "nvme_io_md": false, 00:18:55.711 "write_zeroes": true, 00:18:55.711 "zcopy": true, 00:18:55.711 "get_zone_info": false, 00:18:55.711 "zone_management": false, 00:18:55.711 "zone_append": false, 00:18:55.711 "compare": false, 00:18:55.711 "compare_and_write": false, 00:18:55.711 "abort": true, 00:18:55.712 "seek_hole": false, 00:18:55.712 "seek_data": false, 00:18:55.712 "copy": true, 00:18:55.712 "nvme_iov_md": false 00:18:55.712 }, 00:18:55.712 "memory_domains": [ 00:18:55.712 { 00:18:55.712 "dma_device_id": "system", 00:18:55.712 "dma_device_type": 1 00:18:55.712 }, 00:18:55.712 { 00:18:55.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.712 "dma_device_type": 2 00:18:55.712 } 00:18:55.712 ], 00:18:55.712 
"driver_specific": {} 00:18:55.712 } 00:18:55.712 ] 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.712 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.970 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.970 "name": "Existed_Raid", 00:18:55.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.970 "strip_size_kb": 0, 00:18:55.970 "state": "configuring", 00:18:55.970 "raid_level": "raid1", 00:18:55.970 "superblock": false, 00:18:55.970 "num_base_bdevs": 4, 00:18:55.970 "num_base_bdevs_discovered": 2, 00:18:55.970 "num_base_bdevs_operational": 4, 00:18:55.970 "base_bdevs_list": [ 00:18:55.970 { 00:18:55.970 "name": "BaseBdev1", 00:18:55.970 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:55.970 "is_configured": true, 00:18:55.970 "data_offset": 0, 00:18:55.970 "data_size": 65536 00:18:55.970 }, 00:18:55.970 { 00:18:55.970 "name": "BaseBdev2", 00:18:55.970 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:18:55.970 "is_configured": true, 00:18:55.970 "data_offset": 0, 00:18:55.970 "data_size": 65536 00:18:55.970 }, 00:18:55.970 { 00:18:55.970 "name": "BaseBdev3", 00:18:55.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.970 "is_configured": false, 00:18:55.970 "data_offset": 0, 00:18:55.970 "data_size": 0 00:18:55.970 }, 00:18:55.970 { 00:18:55.970 "name": "BaseBdev4", 00:18:55.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.971 "is_configured": false, 00:18:55.971 "data_offset": 0, 00:18:55.971 "data_size": 0 00:18:55.971 } 00:18:55.971 ] 00:18:55.971 }' 00:18:55.971 11:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.971 11:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.535 11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.894 [2024-07-25 11:30:12.650250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.894 BaseBdev3 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:56.894 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.152 11:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:57.411 [ 00:18:57.411 { 00:18:57.411 "name": "BaseBdev3", 00:18:57.411 "aliases": [ 00:18:57.411 "10e6e48a-13f5-4b03-846c-c73dc199c94a" 00:18:57.411 ], 00:18:57.411 "product_name": "Malloc disk", 00:18:57.411 "block_size": 512, 00:18:57.411 "num_blocks": 65536, 00:18:57.411 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:18:57.411 "assigned_rate_limits": { 00:18:57.411 "rw_ios_per_sec": 0, 00:18:57.411 "rw_mbytes_per_sec": 0, 00:18:57.411 "r_mbytes_per_sec": 0, 00:18:57.411 "w_mbytes_per_sec": 0 00:18:57.411 }, 00:18:57.411 "claimed": true, 00:18:57.411 "claim_type": "exclusive_write", 00:18:57.411 "zoned": false, 00:18:57.411 "supported_io_types": { 00:18:57.411 "read": true, 00:18:57.411 "write": true, 00:18:57.411 "unmap": true, 00:18:57.411 "flush": true, 00:18:57.411 "reset": true, 00:18:57.411 "nvme_admin": false, 00:18:57.411 "nvme_io": false, 00:18:57.411 "nvme_io_md": false, 00:18:57.411 "write_zeroes": true, 00:18:57.411 "zcopy": true, 00:18:57.411 "get_zone_info": false, 00:18:57.411 "zone_management": false, 00:18:57.411 "zone_append": false, 00:18:57.411 "compare": false, 00:18:57.411 "compare_and_write": false, 00:18:57.411 "abort": true, 00:18:57.411 "seek_hole": false, 00:18:57.411 "seek_data": false, 00:18:57.411 "copy": true, 00:18:57.411 "nvme_iov_md": false 00:18:57.411 }, 00:18:57.411 "memory_domains": [ 00:18:57.411 { 00:18:57.411 "dma_device_id": "system", 00:18:57.411 "dma_device_type": 1 00:18:57.411 }, 00:18:57.411 { 00:18:57.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.411 "dma_device_type": 2 00:18:57.411 } 00:18:57.411 ], 00:18:57.411 "driver_specific": {} 00:18:57.411 } 00:18:57.411 ] 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.411 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.669 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.669 "name": "Existed_Raid", 00:18:57.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.669 "strip_size_kb": 0, 00:18:57.669 "state": "configuring", 00:18:57.669 "raid_level": "raid1", 00:18:57.669 "superblock": false, 00:18:57.669 "num_base_bdevs": 4, 00:18:57.669 "num_base_bdevs_discovered": 3, 00:18:57.669 "num_base_bdevs_operational": 4, 00:18:57.669 "base_bdevs_list": [ 00:18:57.669 { 00:18:57.669 "name": "BaseBdev1", 00:18:57.669 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:57.669 "is_configured": true, 00:18:57.669 "data_offset": 0, 00:18:57.669 "data_size": 65536 00:18:57.669 }, 00:18:57.669 { 00:18:57.669 "name": "BaseBdev2", 00:18:57.669 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:18:57.669 "is_configured": true, 00:18:57.670 "data_offset": 0, 00:18:57.670 "data_size": 65536 00:18:57.670 }, 00:18:57.670 { 00:18:57.670 "name": "BaseBdev3", 00:18:57.670 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:18:57.670 "is_configured": true, 00:18:57.670 "data_offset": 0, 00:18:57.670 "data_size": 65536 00:18:57.670 }, 00:18:57.670 { 00:18:57.670 "name": "BaseBdev4", 00:18:57.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.670 "is_configured": false, 00:18:57.670 "data_offset": 0, 00:18:57.670 "data_size": 0 00:18:57.670 } 00:18:57.670 ] 00:18:57.670 }' 00:18:57.670 11:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.670 11:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:58.604 [2024-07-25 11:30:14.380940] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:58.604 [2024-07-25 11:30:14.381014] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:58.604 [2024-07-25 11:30:14.381030] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:58.604 [2024-07-25 11:30:14.381368] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:18:58.604 [2024-07-25 11:30:14.381586] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.604 [2024-07-25 11:30:14.381603] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:58.604 [2024-07-25 11:30:14.381935] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.604 BaseBdev4 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:58.604 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.862 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:59.121 [ 00:18:59.121 { 00:18:59.121 "name": "BaseBdev4", 00:18:59.121 "aliases": [ 00:18:59.121 "fb975279-d089-4d15-9d02-ac3a78a2c8be" 00:18:59.121 ], 00:18:59.121 "product_name": "Malloc disk", 00:18:59.121 "block_size": 512, 00:18:59.121 "num_blocks": 65536, 00:18:59.121 "uuid": "fb975279-d089-4d15-9d02-ac3a78a2c8be", 00:18:59.121 "assigned_rate_limits": { 00:18:59.121 "rw_ios_per_sec": 0, 00:18:59.121 "rw_mbytes_per_sec": 0, 00:18:59.121 "r_mbytes_per_sec": 0, 00:18:59.121 "w_mbytes_per_sec": 0 00:18:59.121 }, 00:18:59.121 "claimed": true, 00:18:59.121 "claim_type": "exclusive_write", 00:18:59.121 "zoned": false, 00:18:59.121 "supported_io_types": { 00:18:59.121 "read": true, 00:18:59.121 "write": true, 00:18:59.121 "unmap": true, 00:18:59.121 "flush": true, 00:18:59.121 "reset": true, 00:18:59.121 "nvme_admin": false, 00:18:59.121 "nvme_io": false, 00:18:59.121 "nvme_io_md": false, 00:18:59.121 "write_zeroes": true, 00:18:59.121 "zcopy": true, 00:18:59.121 "get_zone_info": false, 00:18:59.121 "zone_management": false, 00:18:59.121 "zone_append": false, 00:18:59.121 "compare": false, 00:18:59.121 "compare_and_write": false, 00:18:59.121 "abort": true, 00:18:59.121 "seek_hole": false, 00:18:59.121 "seek_data": false, 00:18:59.121 "copy": true, 00:18:59.122 "nvme_iov_md": false 00:18:59.122 }, 00:18:59.122 "memory_domains": [ 00:18:59.122 { 00:18:59.122 "dma_device_id": "system", 00:18:59.122 "dma_device_type": 1 00:18:59.122 }, 00:18:59.122 { 00:18:59.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.122 "dma_device_type": 2 00:18:59.122 } 00:18:59.122 ], 00:18:59.122 "driver_specific": {} 00:18:59.122 } 00:18:59.122 ] 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.122 11:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.380 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.380 "name": "Existed_Raid", 00:18:59.381 "uuid": "1cb9eab3-1454-47a5-bb79-e8017bbaa0e7", 00:18:59.381 "strip_size_kb": 0, 00:18:59.381 "state": "online", 00:18:59.381 "raid_level": "raid1", 00:18:59.381 "superblock": false, 00:18:59.381 "num_base_bdevs": 4, 00:18:59.381 "num_base_bdevs_discovered": 4, 00:18:59.381 "num_base_bdevs_operational": 4, 00:18:59.381 "base_bdevs_list": [ 00:18:59.381 { 00:18:59.381 "name": "BaseBdev1", 00:18:59.381 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:18:59.381 "is_configured": true, 00:18:59.381 "data_offset": 0, 00:18:59.381 "data_size": 65536 00:18:59.381 }, 00:18:59.381 { 00:18:59.381 "name": "BaseBdev2", 00:18:59.381 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:18:59.381 "is_configured": true, 00:18:59.381 "data_offset": 0, 00:18:59.381 "data_size": 65536 00:18:59.381 }, 00:18:59.381 { 00:18:59.381 "name": "BaseBdev3", 00:18:59.381 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:18:59.381 "is_configured": true, 00:18:59.381 "data_offset": 0, 00:18:59.381 "data_size": 65536 00:18:59.381 }, 00:18:59.381 { 00:18:59.381 "name": "BaseBdev4", 00:18:59.381 "uuid": "fb975279-d089-4d15-9d02-ac3a78a2c8be", 00:18:59.381 "is_configured": true, 00:18:59.381 "data_offset": 0, 00:18:59.381 "data_size": 65536 00:18:59.381 } 00:18:59.381 ] 00:18:59.381 }' 00:18:59.381 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.381 11:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:59.947 11:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:00.204 [2024-07-25 11:30:15.989831] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:00.204 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:00.204 "name": "Existed_Raid", 00:19:00.204 "aliases": [ 00:19:00.204 "1cb9eab3-1454-47a5-bb79-e8017bbaa0e7" 00:19:00.204 ], 00:19:00.204 "product_name": "Raid Volume", 00:19:00.204 "block_size": 512, 00:19:00.204 "num_blocks": 65536, 00:19:00.204 "uuid": "1cb9eab3-1454-47a5-bb79-e8017bbaa0e7", 00:19:00.204 "assigned_rate_limits": { 00:19:00.204 "rw_ios_per_sec": 0, 00:19:00.204 "rw_mbytes_per_sec": 0, 00:19:00.205 "r_mbytes_per_sec": 0, 00:19:00.205 "w_mbytes_per_sec": 0 00:19:00.205 }, 00:19:00.205 "claimed": false, 00:19:00.205 "zoned": false, 00:19:00.205 "supported_io_types": { 00:19:00.205 "read": true, 00:19:00.205 "write": true, 00:19:00.205 "unmap": false, 00:19:00.205 "flush": false, 00:19:00.205 "reset": true, 00:19:00.205 "nvme_admin": false, 00:19:00.205 "nvme_io": false, 00:19:00.205 "nvme_io_md": false, 00:19:00.205 "write_zeroes": true, 00:19:00.205 "zcopy": false, 00:19:00.205 "get_zone_info": false, 00:19:00.205 "zone_management": false, 00:19:00.205 "zone_append": false, 00:19:00.205 "compare": false, 00:19:00.205 "compare_and_write": false, 00:19:00.205 "abort": false, 00:19:00.205 "seek_hole": false, 00:19:00.205 "seek_data": false, 00:19:00.205 "copy": false, 00:19:00.205 "nvme_iov_md": false 00:19:00.205 }, 00:19:00.205 "memory_domains": [ 00:19:00.205 { 00:19:00.205 "dma_device_id": "system", 00:19:00.205 "dma_device_type": 1 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.205 "dma_device_type": 2 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "system", 00:19:00.205 "dma_device_type": 1 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.205 "dma_device_type": 2 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "system", 00:19:00.205 "dma_device_type": 1 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.205 "dma_device_type": 2 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "system", 00:19:00.205 "dma_device_type": 1 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.205 "dma_device_type": 2 00:19:00.205 } 00:19:00.205 ], 00:19:00.205 "driver_specific": { 00:19:00.205 "raid": { 00:19:00.205 "uuid": "1cb9eab3-1454-47a5-bb79-e8017bbaa0e7", 00:19:00.205 "strip_size_kb": 0, 00:19:00.205 "state": "online", 00:19:00.205 "raid_level": "raid1", 00:19:00.205 "superblock": false, 00:19:00.205 "num_base_bdevs": 4, 00:19:00.205 "num_base_bdevs_discovered": 4, 00:19:00.205 "num_base_bdevs_operational": 4, 00:19:00.205 "base_bdevs_list": [ 00:19:00.205 { 00:19:00.205 "name": "BaseBdev1", 00:19:00.205 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:19:00.205 "is_configured": true, 00:19:00.205 "data_offset": 0, 00:19:00.205 "data_size": 65536 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "name": "BaseBdev2", 
00:19:00.205 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:19:00.205 "is_configured": true, 00:19:00.205 "data_offset": 0, 00:19:00.205 "data_size": 65536 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "name": "BaseBdev3", 00:19:00.205 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:19:00.205 "is_configured": true, 00:19:00.205 "data_offset": 0, 00:19:00.205 "data_size": 65536 00:19:00.205 }, 00:19:00.205 { 00:19:00.205 "name": "BaseBdev4", 00:19:00.205 "uuid": "fb975279-d089-4d15-9d02-ac3a78a2c8be", 00:19:00.205 "is_configured": true, 00:19:00.205 "data_offset": 0, 00:19:00.205 "data_size": 65536 00:19:00.205 } 00:19:00.205 ] 00:19:00.205 } 00:19:00.205 } 00:19:00.205 }' 00:19:00.205 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:00.205 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:00.205 BaseBdev2 00:19:00.205 BaseBdev3 00:19:00.205 BaseBdev4' 00:19:00.205 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:00.205 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:00.205 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:00.771 "name": "BaseBdev1", 00:19:00.771 "aliases": [ 00:19:00.771 "ca9e11f1-3a75-4fad-a00b-e75a367662f3" 00:19:00.771 ], 00:19:00.771 "product_name": "Malloc disk", 00:19:00.771 "block_size": 512, 00:19:00.771 "num_blocks": 65536, 00:19:00.771 "uuid": "ca9e11f1-3a75-4fad-a00b-e75a367662f3", 00:19:00.771 "assigned_rate_limits": { 00:19:00.771 "rw_ios_per_sec": 0, 00:19:00.771 "rw_mbytes_per_sec": 0, 00:19:00.771 "r_mbytes_per_sec": 0, 00:19:00.771 "w_mbytes_per_sec": 0 00:19:00.771 }, 00:19:00.771 "claimed": true, 00:19:00.771 "claim_type": "exclusive_write", 00:19:00.771 "zoned": false, 00:19:00.771 "supported_io_types": { 00:19:00.771 "read": true, 00:19:00.771 "write": true, 00:19:00.771 "unmap": true, 00:19:00.771 "flush": true, 00:19:00.771 "reset": true, 00:19:00.771 "nvme_admin": false, 00:19:00.771 "nvme_io": false, 00:19:00.771 "nvme_io_md": false, 00:19:00.771 "write_zeroes": true, 00:19:00.771 "zcopy": true, 00:19:00.771 "get_zone_info": false, 00:19:00.771 "zone_management": false, 00:19:00.771 "zone_append": false, 00:19:00.771 "compare": false, 00:19:00.771 "compare_and_write": false, 00:19:00.771 "abort": true, 00:19:00.771 "seek_hole": false, 00:19:00.771 "seek_data": false, 00:19:00.771 "copy": true, 00:19:00.771 "nvme_iov_md": false 00:19:00.771 }, 00:19:00.771 "memory_domains": [ 00:19:00.771 { 00:19:00.771 "dma_device_id": "system", 00:19:00.771 "dma_device_type": 1 00:19:00.771 }, 00:19:00.771 { 00:19:00.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.771 "dma_device_type": 2 00:19:00.771 } 00:19:00.771 ], 00:19:00.771 "driver_specific": {} 00:19:00.771 }' 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:00.771 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:01.029 11:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:01.286 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:01.286 "name": "BaseBdev2", 00:19:01.286 "aliases": [ 00:19:01.286 "30281078-f4ba-43b2-ae09-72044625b3ed" 00:19:01.286 ], 00:19:01.286 "product_name": "Malloc disk", 00:19:01.286 "block_size": 512, 00:19:01.286 "num_blocks": 65536, 00:19:01.286 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:19:01.286 "assigned_rate_limits": { 00:19:01.286 "rw_ios_per_sec": 0, 00:19:01.286 "rw_mbytes_per_sec": 0, 00:19:01.286 "r_mbytes_per_sec": 0, 00:19:01.286 "w_mbytes_per_sec": 0 00:19:01.286 }, 00:19:01.286 "claimed": true, 00:19:01.286 "claim_type": "exclusive_write", 00:19:01.286 "zoned": false, 00:19:01.286 "supported_io_types": { 00:19:01.286 "read": true, 00:19:01.286 "write": true, 00:19:01.286 "unmap": true, 00:19:01.286 "flush": true, 00:19:01.286 "reset": true, 00:19:01.286 "nvme_admin": false, 00:19:01.286 "nvme_io": false, 00:19:01.286 "nvme_io_md": false, 00:19:01.286 "write_zeroes": true, 00:19:01.286 "zcopy": true, 00:19:01.286 "get_zone_info": false, 00:19:01.286 "zone_management": false, 00:19:01.286 "zone_append": false, 00:19:01.286 "compare": false, 00:19:01.286 "compare_and_write": false, 00:19:01.286 "abort": true, 00:19:01.286 "seek_hole": false, 00:19:01.286 "seek_data": false, 00:19:01.286 "copy": true, 00:19:01.286 "nvme_iov_md": false 00:19:01.286 }, 00:19:01.286 "memory_domains": [ 00:19:01.286 { 00:19:01.286 "dma_device_id": "system", 00:19:01.286 "dma_device_type": 1 00:19:01.286 }, 00:19:01.286 { 00:19:01.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.286 "dma_device_type": 2 00:19:01.286 } 00:19:01.286 ], 00:19:01.286 "driver_specific": {} 00:19:01.286 }' 00:19:01.286 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:01.286 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:01.544 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:01.801 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:01.801 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:01.801 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:01.801 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:01.801 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:02.059 "name": "BaseBdev3", 00:19:02.059 "aliases": [ 00:19:02.059 "10e6e48a-13f5-4b03-846c-c73dc199c94a" 00:19:02.059 ], 00:19:02.059 "product_name": "Malloc disk", 00:19:02.059 "block_size": 512, 00:19:02.059 "num_blocks": 65536, 00:19:02.059 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:19:02.059 "assigned_rate_limits": { 00:19:02.059 "rw_ios_per_sec": 0, 00:19:02.059 "rw_mbytes_per_sec": 0, 00:19:02.059 "r_mbytes_per_sec": 0, 00:19:02.059 "w_mbytes_per_sec": 0 00:19:02.059 }, 00:19:02.059 "claimed": true, 00:19:02.059 "claim_type": "exclusive_write", 00:19:02.059 "zoned": false, 00:19:02.059 "supported_io_types": { 00:19:02.059 "read": true, 00:19:02.059 "write": true, 00:19:02.059 "unmap": true, 00:19:02.059 "flush": true, 00:19:02.059 "reset": true, 00:19:02.059 "nvme_admin": false, 00:19:02.059 "nvme_io": false, 00:19:02.059 "nvme_io_md": false, 00:19:02.059 "write_zeroes": true, 00:19:02.059 "zcopy": true, 00:19:02.059 "get_zone_info": false, 00:19:02.059 "zone_management": false, 00:19:02.059 "zone_append": false, 00:19:02.059 "compare": false, 00:19:02.059 "compare_and_write": false, 00:19:02.059 "abort": true, 00:19:02.059 "seek_hole": false, 00:19:02.059 "seek_data": false, 00:19:02.059 "copy": true, 00:19:02.059 "nvme_iov_md": false 00:19:02.059 }, 00:19:02.059 "memory_domains": [ 00:19:02.059 { 00:19:02.059 "dma_device_id": "system", 00:19:02.059 "dma_device_type": 1 00:19:02.059 }, 00:19:02.059 { 00:19:02.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.059 "dma_device_type": 2 00:19:02.059 } 00:19:02.059 ], 00:19:02.059 "driver_specific": {} 00:19:02.059 }' 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.059 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.317 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:02.317 11:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:02.317 11:30:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:02.317 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.317 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.317 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.574 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:02.574 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:02.575 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:02.575 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:02.833 "name": "BaseBdev4", 00:19:02.833 "aliases": [ 00:19:02.833 "fb975279-d089-4d15-9d02-ac3a78a2c8be" 00:19:02.833 ], 00:19:02.833 "product_name": "Malloc disk", 00:19:02.833 "block_size": 512, 00:19:02.833 "num_blocks": 65536, 00:19:02.833 "uuid": "fb975279-d089-4d15-9d02-ac3a78a2c8be", 00:19:02.833 "assigned_rate_limits": { 00:19:02.833 "rw_ios_per_sec": 0, 00:19:02.833 "rw_mbytes_per_sec": 0, 00:19:02.833 "r_mbytes_per_sec": 0, 00:19:02.833 "w_mbytes_per_sec": 0 00:19:02.833 }, 00:19:02.833 "claimed": true, 00:19:02.833 "claim_type": "exclusive_write", 00:19:02.833 "zoned": false, 00:19:02.833 "supported_io_types": { 00:19:02.833 "read": true, 00:19:02.833 "write": true, 00:19:02.833 "unmap": true, 00:19:02.833 "flush": true, 00:19:02.833 "reset": true, 00:19:02.833 "nvme_admin": false, 00:19:02.833 "nvme_io": false, 00:19:02.833 "nvme_io_md": false, 00:19:02.833 "write_zeroes": true, 00:19:02.833 "zcopy": true, 00:19:02.833 "get_zone_info": false, 00:19:02.833 "zone_management": false, 00:19:02.833 "zone_append": false, 00:19:02.833 "compare": false, 00:19:02.833 "compare_and_write": false, 00:19:02.833 "abort": true, 00:19:02.833 "seek_hole": false, 00:19:02.833 "seek_data": false, 00:19:02.833 "copy": true, 00:19:02.833 "nvme_iov_md": false 00:19:02.833 }, 00:19:02.833 "memory_domains": [ 00:19:02.833 { 00:19:02.833 "dma_device_id": "system", 00:19:02.833 "dma_device_type": 1 00:19:02.833 }, 00:19:02.833 { 00:19:02.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.833 "dma_device_type": 2 00:19:02.833 } 00:19:02.833 ], 00:19:02.833 "driver_specific": {} 00:19:02.833 }' 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:02.833 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.094 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.094 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:03.094 11:30:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.094 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.095 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:03.095 11:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:03.352 [2024-07-25 11:30:19.162391] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.610 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.868 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.868 "name": "Existed_Raid", 00:19:03.868 "uuid": "1cb9eab3-1454-47a5-bb79-e8017bbaa0e7", 00:19:03.868 "strip_size_kb": 0, 00:19:03.868 "state": "online", 00:19:03.868 "raid_level": "raid1", 00:19:03.868 "superblock": false, 00:19:03.868 "num_base_bdevs": 4, 00:19:03.868 "num_base_bdevs_discovered": 3, 00:19:03.868 "num_base_bdevs_operational": 3, 00:19:03.868 "base_bdevs_list": [ 00:19:03.868 { 00:19:03.868 "name": null, 00:19:03.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.868 "is_configured": false, 00:19:03.868 "data_offset": 0, 00:19:03.868 "data_size": 65536 00:19:03.868 }, 00:19:03.868 { 00:19:03.868 "name": "BaseBdev2", 00:19:03.868 "uuid": "30281078-f4ba-43b2-ae09-72044625b3ed", 00:19:03.868 "is_configured": true, 00:19:03.868 "data_offset": 0, 00:19:03.868 "data_size": 65536 00:19:03.868 }, 00:19:03.868 { 00:19:03.868 "name": "BaseBdev3", 
00:19:03.868 "uuid": "10e6e48a-13f5-4b03-846c-c73dc199c94a", 00:19:03.868 "is_configured": true, 00:19:03.868 "data_offset": 0, 00:19:03.868 "data_size": 65536 00:19:03.868 }, 00:19:03.868 { 00:19:03.868 "name": "BaseBdev4", 00:19:03.868 "uuid": "fb975279-d089-4d15-9d02-ac3a78a2c8be", 00:19:03.868 "is_configured": true, 00:19:03.868 "data_offset": 0, 00:19:03.868 "data_size": 65536 00:19:03.868 } 00:19:03.868 ] 00:19:03.868 }' 00:19:03.868 11:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.868 11:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.432 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:04.433 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:04.433 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.433 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:04.692 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:04.692 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:04.692 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:04.951 [2024-07-25 11:30:20.777633] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:05.209 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:05.209 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:05.209 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.209 11:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:05.466 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:05.466 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:05.466 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:05.724 [2024-07-25 11:30:21.420017] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:05.724 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:05.724 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:05.724 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.724 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:05.981 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:05.981 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:05.981 11:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:06.239 [2024-07-25 11:30:22.114137] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:06.239 [2024-07-25 11:30:22.114499] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.497 [2024-07-25 11:30:22.201688] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.497 [2024-07-25 11:30:22.202037] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.497 [2024-07-25 11:30:22.202066] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:06.497 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:06.497 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:06.497 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.497 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:06.755 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:07.013 BaseBdev2 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:07.013 11:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.613 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:07.870 [ 00:19:07.870 { 00:19:07.870 "name": "BaseBdev2", 00:19:07.870 "aliases": [ 00:19:07.870 "2f327f57-bde9-4a11-b28b-0a0b5d480090" 00:19:07.870 ], 00:19:07.870 "product_name": "Malloc disk", 00:19:07.870 "block_size": 512, 00:19:07.870 "num_blocks": 65536, 00:19:07.870 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:07.870 "assigned_rate_limits": { 00:19:07.870 "rw_ios_per_sec": 0, 00:19:07.870 "rw_mbytes_per_sec": 0, 00:19:07.870 "r_mbytes_per_sec": 0, 
00:19:07.870 "w_mbytes_per_sec": 0 00:19:07.870 }, 00:19:07.870 "claimed": false, 00:19:07.870 "zoned": false, 00:19:07.870 "supported_io_types": { 00:19:07.870 "read": true, 00:19:07.871 "write": true, 00:19:07.871 "unmap": true, 00:19:07.871 "flush": true, 00:19:07.871 "reset": true, 00:19:07.871 "nvme_admin": false, 00:19:07.871 "nvme_io": false, 00:19:07.871 "nvme_io_md": false, 00:19:07.871 "write_zeroes": true, 00:19:07.871 "zcopy": true, 00:19:07.871 "get_zone_info": false, 00:19:07.871 "zone_management": false, 00:19:07.871 "zone_append": false, 00:19:07.871 "compare": false, 00:19:07.871 "compare_and_write": false, 00:19:07.871 "abort": true, 00:19:07.871 "seek_hole": false, 00:19:07.871 "seek_data": false, 00:19:07.871 "copy": true, 00:19:07.871 "nvme_iov_md": false 00:19:07.871 }, 00:19:07.871 "memory_domains": [ 00:19:07.871 { 00:19:07.871 "dma_device_id": "system", 00:19:07.871 "dma_device_type": 1 00:19:07.871 }, 00:19:07.871 { 00:19:07.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.871 "dma_device_type": 2 00:19:07.871 } 00:19:07.871 ], 00:19:07.871 "driver_specific": {} 00:19:07.871 } 00:19:07.871 ] 00:19:07.871 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:07.871 11:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:07.871 11:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:07.871 11:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:08.134 BaseBdev3 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:08.134 11:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.398 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:08.684 [ 00:19:08.684 { 00:19:08.684 "name": "BaseBdev3", 00:19:08.684 "aliases": [ 00:19:08.684 "667fac5e-4289-42c5-9c7a-02aa3e367a27" 00:19:08.684 ], 00:19:08.684 "product_name": "Malloc disk", 00:19:08.684 "block_size": 512, 00:19:08.684 "num_blocks": 65536, 00:19:08.684 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:08.684 "assigned_rate_limits": { 00:19:08.684 "rw_ios_per_sec": 0, 00:19:08.684 "rw_mbytes_per_sec": 0, 00:19:08.684 "r_mbytes_per_sec": 0, 00:19:08.684 "w_mbytes_per_sec": 0 00:19:08.685 }, 00:19:08.685 "claimed": false, 00:19:08.685 "zoned": false, 00:19:08.685 "supported_io_types": { 00:19:08.685 "read": true, 00:19:08.685 "write": true, 00:19:08.685 "unmap": true, 00:19:08.685 "flush": true, 00:19:08.685 "reset": true, 00:19:08.685 "nvme_admin": false, 00:19:08.685 "nvme_io": 
false, 00:19:08.685 "nvme_io_md": false, 00:19:08.685 "write_zeroes": true, 00:19:08.685 "zcopy": true, 00:19:08.685 "get_zone_info": false, 00:19:08.685 "zone_management": false, 00:19:08.685 "zone_append": false, 00:19:08.685 "compare": false, 00:19:08.685 "compare_and_write": false, 00:19:08.685 "abort": true, 00:19:08.685 "seek_hole": false, 00:19:08.685 "seek_data": false, 00:19:08.685 "copy": true, 00:19:08.685 "nvme_iov_md": false 00:19:08.685 }, 00:19:08.685 "memory_domains": [ 00:19:08.685 { 00:19:08.685 "dma_device_id": "system", 00:19:08.685 "dma_device_type": 1 00:19:08.685 }, 00:19:08.685 { 00:19:08.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.685 "dma_device_type": 2 00:19:08.685 } 00:19:08.685 ], 00:19:08.685 "driver_specific": {} 00:19:08.685 } 00:19:08.685 ] 00:19:08.685 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:08.685 11:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:08.685 11:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:08.685 11:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:08.954 BaseBdev4 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:08.954 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.213 11:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:09.486 [ 00:19:09.486 { 00:19:09.486 "name": "BaseBdev4", 00:19:09.486 "aliases": [ 00:19:09.486 "82749985-ae53-4b70-9404-f9de05c19ec5" 00:19:09.486 ], 00:19:09.486 "product_name": "Malloc disk", 00:19:09.486 "block_size": 512, 00:19:09.486 "num_blocks": 65536, 00:19:09.486 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:09.486 "assigned_rate_limits": { 00:19:09.486 "rw_ios_per_sec": 0, 00:19:09.486 "rw_mbytes_per_sec": 0, 00:19:09.486 "r_mbytes_per_sec": 0, 00:19:09.486 "w_mbytes_per_sec": 0 00:19:09.486 }, 00:19:09.486 "claimed": false, 00:19:09.486 "zoned": false, 00:19:09.486 "supported_io_types": { 00:19:09.486 "read": true, 00:19:09.486 "write": true, 00:19:09.486 "unmap": true, 00:19:09.486 "flush": true, 00:19:09.486 "reset": true, 00:19:09.486 "nvme_admin": false, 00:19:09.486 "nvme_io": false, 00:19:09.486 "nvme_io_md": false, 00:19:09.486 "write_zeroes": true, 00:19:09.486 "zcopy": true, 00:19:09.486 "get_zone_info": false, 00:19:09.486 "zone_management": false, 00:19:09.486 "zone_append": false, 00:19:09.486 "compare": false, 00:19:09.486 "compare_and_write": false, 00:19:09.486 "abort": true, 00:19:09.486 "seek_hole": false, 
00:19:09.486 "seek_data": false, 00:19:09.486 "copy": true, 00:19:09.486 "nvme_iov_md": false 00:19:09.486 }, 00:19:09.486 "memory_domains": [ 00:19:09.486 { 00:19:09.486 "dma_device_id": "system", 00:19:09.486 "dma_device_type": 1 00:19:09.486 }, 00:19:09.486 { 00:19:09.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.486 "dma_device_type": 2 00:19:09.486 } 00:19:09.486 ], 00:19:09.486 "driver_specific": {} 00:19:09.486 } 00:19:09.486 ] 00:19:09.486 11:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:09.486 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:09.486 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:09.486 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:09.745 [2024-07-25 11:30:25.440142] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.745 [2024-07-25 11:30:25.440216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.745 [2024-07-25 11:30:25.440262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.745 [2024-07-25 11:30:25.442651] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:09.745 [2024-07-25 11:30:25.442723] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.745 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.003 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.003 "name": "Existed_Raid", 00:19:10.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.003 "strip_size_kb": 0, 00:19:10.003 "state": "configuring", 00:19:10.003 "raid_level": "raid1", 00:19:10.003 "superblock": false, 00:19:10.003 "num_base_bdevs": 4, 00:19:10.003 "num_base_bdevs_discovered": 3, 00:19:10.003 
"num_base_bdevs_operational": 4, 00:19:10.003 "base_bdevs_list": [ 00:19:10.003 { 00:19:10.003 "name": "BaseBdev1", 00:19:10.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.003 "is_configured": false, 00:19:10.003 "data_offset": 0, 00:19:10.003 "data_size": 0 00:19:10.003 }, 00:19:10.003 { 00:19:10.003 "name": "BaseBdev2", 00:19:10.003 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:10.003 "is_configured": true, 00:19:10.003 "data_offset": 0, 00:19:10.003 "data_size": 65536 00:19:10.003 }, 00:19:10.003 { 00:19:10.003 "name": "BaseBdev3", 00:19:10.003 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:10.003 "is_configured": true, 00:19:10.003 "data_offset": 0, 00:19:10.003 "data_size": 65536 00:19:10.003 }, 00:19:10.003 { 00:19:10.003 "name": "BaseBdev4", 00:19:10.003 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:10.003 "is_configured": true, 00:19:10.003 "data_offset": 0, 00:19:10.003 "data_size": 65536 00:19:10.003 } 00:19:10.003 ] 00:19:10.003 }' 00:19:10.003 11:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.003 11:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:10.937 [2024-07-25 11:30:26.764481] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.937 11:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.503 11:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.503 "name": "Existed_Raid", 00:19:11.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.503 "strip_size_kb": 0, 00:19:11.503 "state": "configuring", 00:19:11.503 "raid_level": "raid1", 00:19:11.503 "superblock": false, 00:19:11.503 "num_base_bdevs": 4, 00:19:11.503 "num_base_bdevs_discovered": 2, 00:19:11.503 "num_base_bdevs_operational": 4, 00:19:11.503 "base_bdevs_list": [ 00:19:11.503 { 00:19:11.503 "name": "BaseBdev1", 00:19:11.503 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:11.503 "is_configured": false, 00:19:11.503 "data_offset": 0, 00:19:11.503 "data_size": 0 00:19:11.503 }, 00:19:11.503 { 00:19:11.503 "name": null, 00:19:11.503 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:11.503 "is_configured": false, 00:19:11.503 "data_offset": 0, 00:19:11.503 "data_size": 65536 00:19:11.503 }, 00:19:11.503 { 00:19:11.503 "name": "BaseBdev3", 00:19:11.503 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:11.503 "is_configured": true, 00:19:11.503 "data_offset": 0, 00:19:11.503 "data_size": 65536 00:19:11.503 }, 00:19:11.503 { 00:19:11.503 "name": "BaseBdev4", 00:19:11.503 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:11.503 "is_configured": true, 00:19:11.503 "data_offset": 0, 00:19:11.503 "data_size": 65536 00:19:11.503 } 00:19:11.503 ] 00:19:11.503 }' 00:19:11.504 11:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.504 11:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.070 11:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:12.070 11:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.328 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:12.328 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.587 [2024-07-25 11:30:28.412670] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.587 BaseBdev1 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:12.587 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.845 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.103 [ 00:19:13.103 { 00:19:13.103 "name": "BaseBdev1", 00:19:13.103 "aliases": [ 00:19:13.103 "64be7c47-7350-49b7-b4ae-6f3ce3080bc0" 00:19:13.103 ], 00:19:13.103 "product_name": "Malloc disk", 00:19:13.103 "block_size": 512, 00:19:13.103 "num_blocks": 65536, 00:19:13.103 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:13.103 "assigned_rate_limits": { 00:19:13.103 "rw_ios_per_sec": 0, 00:19:13.103 "rw_mbytes_per_sec": 0, 00:19:13.103 "r_mbytes_per_sec": 0, 00:19:13.103 "w_mbytes_per_sec": 0 00:19:13.103 }, 00:19:13.103 "claimed": true, 00:19:13.103 "claim_type": "exclusive_write", 00:19:13.103 "zoned": false, 00:19:13.103 "supported_io_types": { 
00:19:13.103 "read": true, 00:19:13.103 "write": true, 00:19:13.103 "unmap": true, 00:19:13.103 "flush": true, 00:19:13.103 "reset": true, 00:19:13.103 "nvme_admin": false, 00:19:13.103 "nvme_io": false, 00:19:13.103 "nvme_io_md": false, 00:19:13.103 "write_zeroes": true, 00:19:13.103 "zcopy": true, 00:19:13.103 "get_zone_info": false, 00:19:13.103 "zone_management": false, 00:19:13.103 "zone_append": false, 00:19:13.103 "compare": false, 00:19:13.103 "compare_and_write": false, 00:19:13.103 "abort": true, 00:19:13.103 "seek_hole": false, 00:19:13.103 "seek_data": false, 00:19:13.103 "copy": true, 00:19:13.103 "nvme_iov_md": false 00:19:13.103 }, 00:19:13.103 "memory_domains": [ 00:19:13.103 { 00:19:13.103 "dma_device_id": "system", 00:19:13.103 "dma_device_type": 1 00:19:13.103 }, 00:19:13.103 { 00:19:13.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.103 "dma_device_type": 2 00:19:13.103 } 00:19:13.103 ], 00:19:13.103 "driver_specific": {} 00:19:13.103 } 00:19:13.103 ] 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.103 11:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.670 11:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.670 "name": "Existed_Raid", 00:19:13.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.670 "strip_size_kb": 0, 00:19:13.670 "state": "configuring", 00:19:13.670 "raid_level": "raid1", 00:19:13.670 "superblock": false, 00:19:13.670 "num_base_bdevs": 4, 00:19:13.670 "num_base_bdevs_discovered": 3, 00:19:13.670 "num_base_bdevs_operational": 4, 00:19:13.670 "base_bdevs_list": [ 00:19:13.670 { 00:19:13.670 "name": "BaseBdev1", 00:19:13.670 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:13.670 "is_configured": true, 00:19:13.670 "data_offset": 0, 00:19:13.670 "data_size": 65536 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "name": null, 00:19:13.670 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:13.670 "is_configured": false, 00:19:13.670 "data_offset": 0, 00:19:13.670 "data_size": 65536 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "name": 
"BaseBdev3", 00:19:13.670 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:13.670 "is_configured": true, 00:19:13.670 "data_offset": 0, 00:19:13.670 "data_size": 65536 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "name": "BaseBdev4", 00:19:13.670 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:13.670 "is_configured": true, 00:19:13.670 "data_offset": 0, 00:19:13.670 "data_size": 65536 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }' 00:19:13.670 11:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.670 11:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.236 11:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.236 11:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:14.494 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:14.494 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:14.752 [2024-07-25 11:30:30.525374] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.752 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.010 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.010 "name": "Existed_Raid", 00:19:15.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.010 "strip_size_kb": 0, 00:19:15.010 "state": "configuring", 00:19:15.010 "raid_level": "raid1", 00:19:15.010 "superblock": false, 00:19:15.010 "num_base_bdevs": 4, 00:19:15.010 "num_base_bdevs_discovered": 2, 00:19:15.010 "num_base_bdevs_operational": 4, 00:19:15.010 "base_bdevs_list": [ 00:19:15.010 { 00:19:15.010 "name": "BaseBdev1", 00:19:15.010 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:15.010 "is_configured": true, 00:19:15.010 "data_offset": 0, 00:19:15.010 "data_size": 65536 
00:19:15.010 }, 00:19:15.010 { 00:19:15.010 "name": null, 00:19:15.010 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:15.010 "is_configured": false, 00:19:15.010 "data_offset": 0, 00:19:15.011 "data_size": 65536 00:19:15.011 }, 00:19:15.011 { 00:19:15.011 "name": null, 00:19:15.011 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:15.011 "is_configured": false, 00:19:15.011 "data_offset": 0, 00:19:15.011 "data_size": 65536 00:19:15.011 }, 00:19:15.011 { 00:19:15.011 "name": "BaseBdev4", 00:19:15.011 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:15.011 "is_configured": true, 00:19:15.011 "data_offset": 0, 00:19:15.011 "data_size": 65536 00:19:15.011 } 00:19:15.011 ] 00:19:15.011 }' 00:19:15.011 11:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.011 11:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.945 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:15.945 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.203 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:16.203 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:16.203 [2024-07-25 11:30:32.057794] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.203 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.770 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.770 "name": "Existed_Raid", 00:19:16.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.770 "strip_size_kb": 0, 00:19:16.770 "state": "configuring", 00:19:16.770 "raid_level": "raid1", 00:19:16.770 "superblock": false, 00:19:16.770 "num_base_bdevs": 4, 00:19:16.770 
"num_base_bdevs_discovered": 3, 00:19:16.770 "num_base_bdevs_operational": 4, 00:19:16.770 "base_bdevs_list": [ 00:19:16.770 { 00:19:16.770 "name": "BaseBdev1", 00:19:16.770 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:16.770 "is_configured": true, 00:19:16.770 "data_offset": 0, 00:19:16.770 "data_size": 65536 00:19:16.770 }, 00:19:16.770 { 00:19:16.770 "name": null, 00:19:16.770 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:16.770 "is_configured": false, 00:19:16.770 "data_offset": 0, 00:19:16.770 "data_size": 65536 00:19:16.770 }, 00:19:16.770 { 00:19:16.770 "name": "BaseBdev3", 00:19:16.770 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:16.770 "is_configured": true, 00:19:16.770 "data_offset": 0, 00:19:16.771 "data_size": 65536 00:19:16.771 }, 00:19:16.771 { 00:19:16.771 "name": "BaseBdev4", 00:19:16.771 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:16.771 "is_configured": true, 00:19:16.771 "data_offset": 0, 00:19:16.771 "data_size": 65536 00:19:16.771 } 00:19:16.771 ] 00:19:16.771 }' 00:19:16.771 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.771 11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.336 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.336 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:17.595 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:17.595 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:17.852 [2024-07-25 11:30:33.622214] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.852 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.110 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.110 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.110 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.110 "name": 
"Existed_Raid", 00:19:18.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.110 "strip_size_kb": 0, 00:19:18.110 "state": "configuring", 00:19:18.110 "raid_level": "raid1", 00:19:18.110 "superblock": false, 00:19:18.110 "num_base_bdevs": 4, 00:19:18.110 "num_base_bdevs_discovered": 2, 00:19:18.110 "num_base_bdevs_operational": 4, 00:19:18.110 "base_bdevs_list": [ 00:19:18.110 { 00:19:18.110 "name": null, 00:19:18.110 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:18.110 "is_configured": false, 00:19:18.110 "data_offset": 0, 00:19:18.110 "data_size": 65536 00:19:18.110 }, 00:19:18.110 { 00:19:18.110 "name": null, 00:19:18.110 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:18.110 "is_configured": false, 00:19:18.110 "data_offset": 0, 00:19:18.110 "data_size": 65536 00:19:18.110 }, 00:19:18.110 { 00:19:18.110 "name": "BaseBdev3", 00:19:18.110 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:18.110 "is_configured": true, 00:19:18.110 "data_offset": 0, 00:19:18.110 "data_size": 65536 00:19:18.110 }, 00:19:18.110 { 00:19:18.110 "name": "BaseBdev4", 00:19:18.110 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:18.110 "is_configured": true, 00:19:18.110 "data_offset": 0, 00:19:18.110 "data_size": 65536 00:19:18.110 } 00:19:18.110 ] 00:19:18.110 }' 00:19:18.110 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.110 11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.044 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.044 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:19.044 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:19.044 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:19.303 [2024-07-25 11:30:35.135002] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.303 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.562 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.562 "name": "Existed_Raid", 00:19:19.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.562 "strip_size_kb": 0, 00:19:19.562 "state": "configuring", 00:19:19.562 "raid_level": "raid1", 00:19:19.562 "superblock": false, 00:19:19.562 "num_base_bdevs": 4, 00:19:19.562 "num_base_bdevs_discovered": 3, 00:19:19.562 "num_base_bdevs_operational": 4, 00:19:19.562 "base_bdevs_list": [ 00:19:19.562 { 00:19:19.562 "name": null, 00:19:19.562 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:19.562 "is_configured": false, 00:19:19.562 "data_offset": 0, 00:19:19.562 "data_size": 65536 00:19:19.562 }, 00:19:19.562 { 00:19:19.562 "name": "BaseBdev2", 00:19:19.562 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:19.562 "is_configured": true, 00:19:19.562 "data_offset": 0, 00:19:19.562 "data_size": 65536 00:19:19.562 }, 00:19:19.562 { 00:19:19.562 "name": "BaseBdev3", 00:19:19.562 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:19.562 "is_configured": true, 00:19:19.562 "data_offset": 0, 00:19:19.562 "data_size": 65536 00:19:19.562 }, 00:19:19.562 { 00:19:19.562 "name": "BaseBdev4", 00:19:19.562 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:19.562 "is_configured": true, 00:19:19.562 "data_offset": 0, 00:19:19.562 "data_size": 65536 00:19:19.562 } 00:19:19.562 ] 00:19:19.562 }' 00:19:19.562 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.562 11:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.498 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.498 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:20.498 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:20.498 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:20.498 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.756 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 64be7c47-7350-49b7-b4ae-6f3ce3080bc0 00:19:21.014 [2024-07-25 11:30:36.887497] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:21.014 [2024-07-25 11:30:36.887569] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:21.014 [2024-07-25 11:30:36.887581] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:21.014 [2024-07-25 11:30:36.887951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:21.014 [2024-07-25 11:30:36.888145] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:21.014 [2024-07-25 11:30:36.888174] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:19:21.014 [2024-07-25 11:30:36.888465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.014 NewBaseBdev 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:21.272 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:21.530 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:21.789 [ 00:19:21.789 { 00:19:21.789 "name": "NewBaseBdev", 00:19:21.789 "aliases": [ 00:19:21.789 "64be7c47-7350-49b7-b4ae-6f3ce3080bc0" 00:19:21.789 ], 00:19:21.789 "product_name": "Malloc disk", 00:19:21.789 "block_size": 512, 00:19:21.789 "num_blocks": 65536, 00:19:21.789 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:21.789 "assigned_rate_limits": { 00:19:21.789 "rw_ios_per_sec": 0, 00:19:21.789 "rw_mbytes_per_sec": 0, 00:19:21.789 "r_mbytes_per_sec": 0, 00:19:21.789 "w_mbytes_per_sec": 0 00:19:21.789 }, 00:19:21.789 "claimed": true, 00:19:21.789 "claim_type": "exclusive_write", 00:19:21.789 "zoned": false, 00:19:21.789 "supported_io_types": { 00:19:21.789 "read": true, 00:19:21.789 "write": true, 00:19:21.789 "unmap": true, 00:19:21.789 "flush": true, 00:19:21.789 "reset": true, 00:19:21.789 "nvme_admin": false, 00:19:21.789 "nvme_io": false, 00:19:21.789 "nvme_io_md": false, 00:19:21.789 "write_zeroes": true, 00:19:21.789 "zcopy": true, 00:19:21.789 "get_zone_info": false, 00:19:21.789 "zone_management": false, 00:19:21.789 "zone_append": false, 00:19:21.789 "compare": false, 00:19:21.789 "compare_and_write": false, 00:19:21.789 "abort": true, 00:19:21.789 "seek_hole": false, 00:19:21.789 "seek_data": false, 00:19:21.789 "copy": true, 00:19:21.789 "nvme_iov_md": false 00:19:21.789 }, 00:19:21.789 "memory_domains": [ 00:19:21.789 { 00:19:21.789 "dma_device_id": "system", 00:19:21.789 "dma_device_type": 1 00:19:21.789 }, 00:19:21.789 { 00:19:21.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.789 "dma_device_type": 2 00:19:21.789 } 00:19:21.789 ], 00:19:21.789 "driver_specific": {} 00:19:21.789 } 00:19:21.789 ] 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.789 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.048 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.048 "name": "Existed_Raid", 00:19:22.048 "uuid": "6872d850-8e50-4a73-9d2d-9be677d86d09", 00:19:22.048 "strip_size_kb": 0, 00:19:22.048 "state": "online", 00:19:22.048 "raid_level": "raid1", 00:19:22.048 "superblock": false, 00:19:22.048 "num_base_bdevs": 4, 00:19:22.048 "num_base_bdevs_discovered": 4, 00:19:22.048 "num_base_bdevs_operational": 4, 00:19:22.048 "base_bdevs_list": [ 00:19:22.048 { 00:19:22.048 "name": "NewBaseBdev", 00:19:22.048 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 0, 00:19:22.048 "data_size": 65536 00:19:22.048 }, 00:19:22.048 { 00:19:22.048 "name": "BaseBdev2", 00:19:22.048 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 0, 00:19:22.048 "data_size": 65536 00:19:22.048 }, 00:19:22.048 { 00:19:22.048 "name": "BaseBdev3", 00:19:22.048 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 0, 00:19:22.048 "data_size": 65536 00:19:22.048 }, 00:19:22.048 { 00:19:22.048 "name": "BaseBdev4", 00:19:22.048 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:22.048 "is_configured": true, 00:19:22.048 "data_offset": 0, 00:19:22.048 "data_size": 65536 00:19:22.048 } 00:19:22.048 ] 00:19:22.048 }' 00:19:22.048 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.048 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:22.680 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:22.939 [2024-07-25 11:30:38.744541] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.939 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:22.939 "name": "Existed_Raid", 00:19:22.939 "aliases": [ 00:19:22.939 "6872d850-8e50-4a73-9d2d-9be677d86d09" 00:19:22.939 ], 00:19:22.939 "product_name": "Raid Volume", 00:19:22.939 "block_size": 512, 00:19:22.939 "num_blocks": 65536, 00:19:22.939 "uuid": "6872d850-8e50-4a73-9d2d-9be677d86d09", 00:19:22.939 "assigned_rate_limits": { 00:19:22.939 "rw_ios_per_sec": 0, 00:19:22.939 "rw_mbytes_per_sec": 0, 00:19:22.939 "r_mbytes_per_sec": 0, 00:19:22.939 "w_mbytes_per_sec": 0 00:19:22.939 }, 00:19:22.939 "claimed": false, 00:19:22.939 "zoned": false, 00:19:22.939 "supported_io_types": { 00:19:22.939 "read": true, 00:19:22.939 "write": true, 00:19:22.939 "unmap": false, 00:19:22.939 "flush": false, 00:19:22.939 "reset": true, 00:19:22.939 "nvme_admin": false, 00:19:22.939 "nvme_io": false, 00:19:22.939 "nvme_io_md": false, 00:19:22.939 "write_zeroes": true, 00:19:22.939 "zcopy": false, 00:19:22.939 "get_zone_info": false, 00:19:22.939 "zone_management": false, 00:19:22.939 "zone_append": false, 00:19:22.939 "compare": false, 00:19:22.939 "compare_and_write": false, 00:19:22.939 "abort": false, 00:19:22.939 "seek_hole": false, 00:19:22.939 "seek_data": false, 00:19:22.939 "copy": false, 00:19:22.939 "nvme_iov_md": false 00:19:22.939 }, 00:19:22.939 "memory_domains": [ 00:19:22.939 { 00:19:22.939 "dma_device_id": "system", 00:19:22.939 "dma_device_type": 1 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.939 "dma_device_type": 2 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "system", 00:19:22.939 "dma_device_type": 1 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.939 "dma_device_type": 2 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "system", 00:19:22.939 "dma_device_type": 1 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.939 "dma_device_type": 2 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "system", 00:19:22.939 "dma_device_type": 1 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.939 "dma_device_type": 2 00:19:22.939 } 00:19:22.939 ], 00:19:22.939 "driver_specific": { 00:19:22.939 "raid": { 00:19:22.939 "uuid": "6872d850-8e50-4a73-9d2d-9be677d86d09", 00:19:22.939 "strip_size_kb": 0, 00:19:22.939 "state": "online", 00:19:22.939 "raid_level": "raid1", 00:19:22.939 "superblock": false, 00:19:22.939 "num_base_bdevs": 4, 00:19:22.939 "num_base_bdevs_discovered": 4, 00:19:22.939 "num_base_bdevs_operational": 4, 00:19:22.939 "base_bdevs_list": [ 00:19:22.939 { 00:19:22.939 "name": "NewBaseBdev", 00:19:22.939 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:22.939 "is_configured": true, 00:19:22.939 "data_offset": 0, 00:19:22.939 "data_size": 65536 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "name": "BaseBdev2", 00:19:22.939 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:22.939 "is_configured": true, 00:19:22.939 "data_offset": 0, 00:19:22.939 "data_size": 65536 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "name": "BaseBdev3", 00:19:22.939 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:22.939 "is_configured": true, 00:19:22.939 "data_offset": 0, 00:19:22.939 "data_size": 65536 00:19:22.939 }, 00:19:22.939 { 00:19:22.939 "name": "BaseBdev4", 00:19:22.939 "uuid": 
"82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:22.939 "is_configured": true, 00:19:22.939 "data_offset": 0, 00:19:22.939 "data_size": 65536 00:19:22.939 } 00:19:22.939 ] 00:19:22.939 } 00:19:22.939 } 00:19:22.939 }' 00:19:22.939 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:23.198 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:23.198 BaseBdev2 00:19:23.198 BaseBdev3 00:19:23.198 BaseBdev4' 00:19:23.198 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:23.198 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:23.198 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.198 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.198 "name": "NewBaseBdev", 00:19:23.198 "aliases": [ 00:19:23.198 "64be7c47-7350-49b7-b4ae-6f3ce3080bc0" 00:19:23.198 ], 00:19:23.198 "product_name": "Malloc disk", 00:19:23.198 "block_size": 512, 00:19:23.198 "num_blocks": 65536, 00:19:23.198 "uuid": "64be7c47-7350-49b7-b4ae-6f3ce3080bc0", 00:19:23.198 "assigned_rate_limits": { 00:19:23.198 "rw_ios_per_sec": 0, 00:19:23.198 "rw_mbytes_per_sec": 0, 00:19:23.198 "r_mbytes_per_sec": 0, 00:19:23.198 "w_mbytes_per_sec": 0 00:19:23.198 }, 00:19:23.198 "claimed": true, 00:19:23.198 "claim_type": "exclusive_write", 00:19:23.198 "zoned": false, 00:19:23.198 "supported_io_types": { 00:19:23.198 "read": true, 00:19:23.198 "write": true, 00:19:23.198 "unmap": true, 00:19:23.198 "flush": true, 00:19:23.198 "reset": true, 00:19:23.198 "nvme_admin": false, 00:19:23.198 "nvme_io": false, 00:19:23.198 "nvme_io_md": false, 00:19:23.198 "write_zeroes": true, 00:19:23.198 "zcopy": true, 00:19:23.198 "get_zone_info": false, 00:19:23.198 "zone_management": false, 00:19:23.198 "zone_append": false, 00:19:23.198 "compare": false, 00:19:23.198 "compare_and_write": false, 00:19:23.198 "abort": true, 00:19:23.198 "seek_hole": false, 00:19:23.198 "seek_data": false, 00:19:23.198 "copy": true, 00:19:23.198 "nvme_iov_md": false 00:19:23.198 }, 00:19:23.198 "memory_domains": [ 00:19:23.198 { 00:19:23.198 "dma_device_id": "system", 00:19:23.198 "dma_device_type": 1 00:19:23.198 }, 00:19:23.198 { 00:19:23.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.198 "dma_device_type": 2 00:19:23.198 } 00:19:23.198 ], 00:19:23.198 "driver_specific": {} 00:19:23.198 }' 00:19:23.198 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.456 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.714 11:30:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:23.714 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.973 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.973 "name": "BaseBdev2", 00:19:23.973 "aliases": [ 00:19:23.973 "2f327f57-bde9-4a11-b28b-0a0b5d480090" 00:19:23.973 ], 00:19:23.973 "product_name": "Malloc disk", 00:19:23.973 "block_size": 512, 00:19:23.973 "num_blocks": 65536, 00:19:23.973 "uuid": "2f327f57-bde9-4a11-b28b-0a0b5d480090", 00:19:23.973 "assigned_rate_limits": { 00:19:23.973 "rw_ios_per_sec": 0, 00:19:23.973 "rw_mbytes_per_sec": 0, 00:19:23.973 "r_mbytes_per_sec": 0, 00:19:23.973 "w_mbytes_per_sec": 0 00:19:23.973 }, 00:19:23.973 "claimed": true, 00:19:23.973 "claim_type": "exclusive_write", 00:19:23.973 "zoned": false, 00:19:23.973 "supported_io_types": { 00:19:23.973 "read": true, 00:19:23.973 "write": true, 00:19:23.973 "unmap": true, 00:19:23.973 "flush": true, 00:19:23.973 "reset": true, 00:19:23.973 "nvme_admin": false, 00:19:23.973 "nvme_io": false, 00:19:23.973 "nvme_io_md": false, 00:19:23.973 "write_zeroes": true, 00:19:23.973 "zcopy": true, 00:19:23.973 "get_zone_info": false, 00:19:23.973 "zone_management": false, 00:19:23.973 "zone_append": false, 00:19:23.973 "compare": false, 00:19:23.973 "compare_and_write": false, 00:19:23.973 "abort": true, 00:19:23.973 "seek_hole": false, 00:19:23.973 "seek_data": false, 00:19:23.973 "copy": true, 00:19:23.973 "nvme_iov_md": false 00:19:23.973 }, 00:19:23.973 "memory_domains": [ 00:19:23.973 { 00:19:23.973 "dma_device_id": "system", 00:19:23.973 "dma_device_type": 1 00:19:23.973 }, 00:19:23.973 { 00:19:23.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.973 "dma_device_type": 2 00:19:23.973 } 00:19:23.973 ], 00:19:23.973 "driver_specific": {} 00:19:23.973 }' 00:19:23.973 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.973 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.973 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.973 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.231 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.231 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.231 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.231 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.231 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:24.231 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.231 
11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.489 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.490 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.490 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.490 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:24.748 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.748 "name": "BaseBdev3", 00:19:24.748 "aliases": [ 00:19:24.748 "667fac5e-4289-42c5-9c7a-02aa3e367a27" 00:19:24.748 ], 00:19:24.748 "product_name": "Malloc disk", 00:19:24.748 "block_size": 512, 00:19:24.748 "num_blocks": 65536, 00:19:24.748 "uuid": "667fac5e-4289-42c5-9c7a-02aa3e367a27", 00:19:24.748 "assigned_rate_limits": { 00:19:24.748 "rw_ios_per_sec": 0, 00:19:24.748 "rw_mbytes_per_sec": 0, 00:19:24.748 "r_mbytes_per_sec": 0, 00:19:24.749 "w_mbytes_per_sec": 0 00:19:24.749 }, 00:19:24.749 "claimed": true, 00:19:24.749 "claim_type": "exclusive_write", 00:19:24.749 "zoned": false, 00:19:24.749 "supported_io_types": { 00:19:24.749 "read": true, 00:19:24.749 "write": true, 00:19:24.749 "unmap": true, 00:19:24.749 "flush": true, 00:19:24.749 "reset": true, 00:19:24.749 "nvme_admin": false, 00:19:24.749 "nvme_io": false, 00:19:24.749 "nvme_io_md": false, 00:19:24.749 "write_zeroes": true, 00:19:24.749 "zcopy": true, 00:19:24.749 "get_zone_info": false, 00:19:24.749 "zone_management": false, 00:19:24.749 "zone_append": false, 00:19:24.749 "compare": false, 00:19:24.749 "compare_and_write": false, 00:19:24.749 "abort": true, 00:19:24.749 "seek_hole": false, 00:19:24.749 "seek_data": false, 00:19:24.749 "copy": true, 00:19:24.749 "nvme_iov_md": false 00:19:24.749 }, 00:19:24.749 "memory_domains": [ 00:19:24.749 { 00:19:24.749 "dma_device_id": "system", 00:19:24.749 "dma_device_type": 1 00:19:24.749 }, 00:19:24.749 { 00:19:24.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.749 "dma_device_type": 2 00:19:24.749 } 00:19:24.749 ], 00:19:24.749 "driver_specific": {} 00:19:24.749 }' 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.749 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:25.007 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.265 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.265 "name": "BaseBdev4", 00:19:25.265 "aliases": [ 00:19:25.265 "82749985-ae53-4b70-9404-f9de05c19ec5" 00:19:25.265 ], 00:19:25.265 "product_name": "Malloc disk", 00:19:25.265 "block_size": 512, 00:19:25.265 "num_blocks": 65536, 00:19:25.265 "uuid": "82749985-ae53-4b70-9404-f9de05c19ec5", 00:19:25.265 "assigned_rate_limits": { 00:19:25.265 "rw_ios_per_sec": 0, 00:19:25.265 "rw_mbytes_per_sec": 0, 00:19:25.265 "r_mbytes_per_sec": 0, 00:19:25.265 "w_mbytes_per_sec": 0 00:19:25.265 }, 00:19:25.265 "claimed": true, 00:19:25.265 "claim_type": "exclusive_write", 00:19:25.265 "zoned": false, 00:19:25.265 "supported_io_types": { 00:19:25.265 "read": true, 00:19:25.265 "write": true, 00:19:25.265 "unmap": true, 00:19:25.265 "flush": true, 00:19:25.265 "reset": true, 00:19:25.265 "nvme_admin": false, 00:19:25.265 "nvme_io": false, 00:19:25.265 "nvme_io_md": false, 00:19:25.265 "write_zeroes": true, 00:19:25.265 "zcopy": true, 00:19:25.265 "get_zone_info": false, 00:19:25.265 "zone_management": false, 00:19:25.265 "zone_append": false, 00:19:25.265 "compare": false, 00:19:25.265 "compare_and_write": false, 00:19:25.265 "abort": true, 00:19:25.266 "seek_hole": false, 00:19:25.266 "seek_data": false, 00:19:25.266 "copy": true, 00:19:25.266 "nvme_iov_md": false 00:19:25.266 }, 00:19:25.266 "memory_domains": [ 00:19:25.266 { 00:19:25.266 "dma_device_id": "system", 00:19:25.266 "dma_device_type": 1 00:19:25.266 }, 00:19:25.266 { 00:19:25.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.266 "dma_device_type": 2 00:19:25.266 } 00:19:25.266 ], 00:19:25.266 "driver_specific": {} 00:19:25.266 }' 00:19:25.266 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.266 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.266 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.266 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.525 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.784 
[2024-07-25 11:30:41.596993] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.784 [2024-07-25 11:30:41.597050] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.784 [2024-07-25 11:30:41.597180] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.784 [2024-07-25 11:30:41.597567] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.784 [2024-07-25 11:30:41.597593] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 82319 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82319 ']' 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82319 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82319 00:19:25.784 killing process with pid 82319 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82319' 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82319 00:19:25.784 [2024-07-25 11:30:41.649125] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.784 11:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82319 00:19:26.349 [2024-07-25 11:30:42.008878] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.319 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.319 00:19:27.319 real 0m38.471s 00:19:27.319 user 1m10.717s 00:19:27.319 sys 0m4.825s 00:19:27.319 11:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.319 ************************************ 00:19:27.319 END TEST raid_state_function_test 00:19:27.319 ************************************ 00:19:27.319 11:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.578 11:30:43 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:27.578 11:30:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:27.578 11:30:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.578 11:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.578 ************************************ 00:19:27.578 START TEST raid_state_function_test_sb 00:19:27.578 ************************************ 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:27.578 11:30:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=83420 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 83420' 00:19:27.578 Process raid pid: 83420 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 83420 /var/tmp/spdk-raid.sock 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83420 ']' 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.578 11:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:27.578 [2024-07-25 11:30:43.362453] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:19:27.578 [2024-07-25 11:30:43.362671] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.836 [2024-07-25 11:30:43.541204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.093 [2024-07-25 11:30:43.817369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.352 [2024-07-25 11:30:44.033295] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.352 [2024-07-25 11:30:44.033361] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.610 11:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.610 11:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:28.610 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.869 [2024-07-25 11:30:44.524669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.869 [2024-07-25 11:30:44.524747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.869 [2024-07-25 11:30:44.524766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.869 [2024-07-25 11:30:44.524779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.869 [2024-07-25 11:30:44.524794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.869 [2024-07-25 11:30:44.524806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.869 [2024-07-25 11:30:44.524818] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.869 [2024-07-25 11:30:44.524829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.869 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.126 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.126 "name": "Existed_Raid", 00:19:29.126 "uuid": "a3c13b52-1864-41ed-bc78-1c7f590155b9", 00:19:29.126 "strip_size_kb": 0, 00:19:29.126 "state": "configuring", 00:19:29.126 "raid_level": "raid1", 00:19:29.126 "superblock": true, 00:19:29.126 "num_base_bdevs": 4, 00:19:29.126 "num_base_bdevs_discovered": 0, 00:19:29.126 "num_base_bdevs_operational": 4, 00:19:29.126 "base_bdevs_list": [ 00:19:29.126 { 00:19:29.126 "name": "BaseBdev1", 00:19:29.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.126 "is_configured": false, 00:19:29.126 "data_offset": 0, 00:19:29.126 "data_size": 0 00:19:29.126 }, 00:19:29.126 { 00:19:29.126 "name": "BaseBdev2", 00:19:29.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.126 "is_configured": false, 00:19:29.126 "data_offset": 0, 00:19:29.126 "data_size": 0 00:19:29.126 }, 00:19:29.126 { 00:19:29.126 "name": "BaseBdev3", 00:19:29.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.126 "is_configured": false, 00:19:29.126 "data_offset": 0, 00:19:29.126 "data_size": 0 00:19:29.126 }, 00:19:29.126 { 00:19:29.126 "name": "BaseBdev4", 00:19:29.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.126 "is_configured": false, 00:19:29.126 "data_offset": 0, 00:19:29.126 "data_size": 0 00:19:29.126 } 00:19:29.126 ] 00:19:29.126 }' 00:19:29.126 11:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.127 11:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.692 11:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:29.950 [2024-07-25 11:30:45.720838] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.950 [2024-07-25 11:30:45.720910] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:29.950 11:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 
BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:30.208 [2024-07-25 11:30:45.944915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.208 [2024-07-25 11:30:45.944989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.208 [2024-07-25 11:30:45.945007] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.208 [2024-07-25 11:30:45.945020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.208 [2024-07-25 11:30:45.945032] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.208 [2024-07-25 11:30:45.945043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.208 [2024-07-25 11:30:45.945055] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.208 [2024-07-25 11:30:45.945066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.208 11:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.466 BaseBdev1 00:19:30.466 [2024-07-25 11:30:46.209713] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:30.466 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.724 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.982 [ 00:19:30.982 { 00:19:30.982 "name": "BaseBdev1", 00:19:30.982 "aliases": [ 00:19:30.982 "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3" 00:19:30.982 ], 00:19:30.982 "product_name": "Malloc disk", 00:19:30.982 "block_size": 512, 00:19:30.982 "num_blocks": 65536, 00:19:30.982 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:30.982 "assigned_rate_limits": { 00:19:30.982 "rw_ios_per_sec": 0, 00:19:30.982 "rw_mbytes_per_sec": 0, 00:19:30.982 "r_mbytes_per_sec": 0, 00:19:30.982 "w_mbytes_per_sec": 0 00:19:30.982 }, 00:19:30.982 "claimed": true, 00:19:30.982 "claim_type": "exclusive_write", 00:19:30.982 "zoned": false, 00:19:30.982 "supported_io_types": { 00:19:30.982 "read": true, 00:19:30.982 "write": true, 00:19:30.982 "unmap": true, 00:19:30.982 "flush": true, 00:19:30.982 "reset": true, 00:19:30.982 "nvme_admin": false, 00:19:30.982 "nvme_io": false, 00:19:30.982 "nvme_io_md": false, 00:19:30.982 "write_zeroes": true, 00:19:30.982 "zcopy": true, 00:19:30.982 "get_zone_info": false, 00:19:30.982 "zone_management": 
false, 00:19:30.982 "zone_append": false, 00:19:30.982 "compare": false, 00:19:30.982 "compare_and_write": false, 00:19:30.982 "abort": true, 00:19:30.982 "seek_hole": false, 00:19:30.982 "seek_data": false, 00:19:30.982 "copy": true, 00:19:30.982 "nvme_iov_md": false 00:19:30.982 }, 00:19:30.982 "memory_domains": [ 00:19:30.982 { 00:19:30.982 "dma_device_id": "system", 00:19:30.982 "dma_device_type": 1 00:19:30.982 }, 00:19:30.982 { 00:19:30.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.982 "dma_device_type": 2 00:19:30.982 } 00:19:30.982 ], 00:19:30.982 "driver_specific": {} 00:19:30.982 } 00:19:30.982 ] 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.982 11:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.240 11:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.240 "name": "Existed_Raid", 00:19:31.240 "uuid": "ffc8615f-f622-41de-bcab-06db407586bf", 00:19:31.240 "strip_size_kb": 0, 00:19:31.240 "state": "configuring", 00:19:31.240 "raid_level": "raid1", 00:19:31.240 "superblock": true, 00:19:31.240 "num_base_bdevs": 4, 00:19:31.240 "num_base_bdevs_discovered": 1, 00:19:31.240 "num_base_bdevs_operational": 4, 00:19:31.240 "base_bdevs_list": [ 00:19:31.240 { 00:19:31.240 "name": "BaseBdev1", 00:19:31.240 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:31.240 "is_configured": true, 00:19:31.240 "data_offset": 2048, 00:19:31.240 "data_size": 63488 00:19:31.240 }, 00:19:31.240 { 00:19:31.240 "name": "BaseBdev2", 00:19:31.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.240 "is_configured": false, 00:19:31.240 "data_offset": 0, 00:19:31.240 "data_size": 0 00:19:31.240 }, 00:19:31.240 { 00:19:31.240 "name": "BaseBdev3", 00:19:31.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.240 "is_configured": false, 00:19:31.240 "data_offset": 0, 00:19:31.240 "data_size": 0 00:19:31.240 }, 00:19:31.240 { 00:19:31.240 "name": "BaseBdev4", 00:19:31.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.240 "is_configured": 
false, 00:19:31.240 "data_offset": 0, 00:19:31.240 "data_size": 0 00:19:31.240 } 00:19:31.240 ] 00:19:31.240 }' 00:19:31.240 11:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.240 11:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.174 11:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:32.174 [2024-07-25 11:30:47.962254] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.174 [2024-07-25 11:30:47.962338] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:32.174 11:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:32.433 [2024-07-25 11:30:48.202386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.433 [2024-07-25 11:30:48.204809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.433 [2024-07-25 11:30:48.204883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.433 [2024-07-25 11:30:48.204901] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.433 [2024-07-25 11:30:48.204914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.433 [2024-07-25 11:30:48.204929] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:32.433 [2024-07-25 11:30:48.204940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.433 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.433 
11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.693 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.693 "name": "Existed_Raid", 00:19:32.693 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:32.693 "strip_size_kb": 0, 00:19:32.693 "state": "configuring", 00:19:32.693 "raid_level": "raid1", 00:19:32.693 "superblock": true, 00:19:32.693 "num_base_bdevs": 4, 00:19:32.693 "num_base_bdevs_discovered": 1, 00:19:32.693 "num_base_bdevs_operational": 4, 00:19:32.693 "base_bdevs_list": [ 00:19:32.693 { 00:19:32.693 "name": "BaseBdev1", 00:19:32.693 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:32.693 "is_configured": true, 00:19:32.693 "data_offset": 2048, 00:19:32.693 "data_size": 63488 00:19:32.693 }, 00:19:32.693 { 00:19:32.693 "name": "BaseBdev2", 00:19:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.693 "is_configured": false, 00:19:32.693 "data_offset": 0, 00:19:32.693 "data_size": 0 00:19:32.693 }, 00:19:32.693 { 00:19:32.693 "name": "BaseBdev3", 00:19:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.693 "is_configured": false, 00:19:32.693 "data_offset": 0, 00:19:32.693 "data_size": 0 00:19:32.693 }, 00:19:32.693 { 00:19:32.693 "name": "BaseBdev4", 00:19:32.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.693 "is_configured": false, 00:19:32.693 "data_offset": 0, 00:19:32.693 "data_size": 0 00:19:32.693 } 00:19:32.693 ] 00:19:32.693 }' 00:19:32.693 11:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.693 11:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.260 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.827 [2024-07-25 11:30:49.433356] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.827 BaseBdev2 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.827 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:34.087 [ 00:19:34.087 { 00:19:34.087 "name": "BaseBdev2", 00:19:34.087 "aliases": [ 00:19:34.087 "11552a52-7202-442c-a17a-93cd801b1c5f" 00:19:34.087 ], 00:19:34.087 "product_name": "Malloc disk", 00:19:34.087 "block_size": 512, 00:19:34.087 "num_blocks": 65536, 00:19:34.087 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:34.087 
"assigned_rate_limits": { 00:19:34.087 "rw_ios_per_sec": 0, 00:19:34.087 "rw_mbytes_per_sec": 0, 00:19:34.087 "r_mbytes_per_sec": 0, 00:19:34.087 "w_mbytes_per_sec": 0 00:19:34.087 }, 00:19:34.087 "claimed": true, 00:19:34.087 "claim_type": "exclusive_write", 00:19:34.087 "zoned": false, 00:19:34.087 "supported_io_types": { 00:19:34.087 "read": true, 00:19:34.087 "write": true, 00:19:34.087 "unmap": true, 00:19:34.087 "flush": true, 00:19:34.087 "reset": true, 00:19:34.087 "nvme_admin": false, 00:19:34.087 "nvme_io": false, 00:19:34.087 "nvme_io_md": false, 00:19:34.087 "write_zeroes": true, 00:19:34.087 "zcopy": true, 00:19:34.087 "get_zone_info": false, 00:19:34.088 "zone_management": false, 00:19:34.088 "zone_append": false, 00:19:34.088 "compare": false, 00:19:34.088 "compare_and_write": false, 00:19:34.088 "abort": true, 00:19:34.088 "seek_hole": false, 00:19:34.088 "seek_data": false, 00:19:34.088 "copy": true, 00:19:34.088 "nvme_iov_md": false 00:19:34.088 }, 00:19:34.088 "memory_domains": [ 00:19:34.088 { 00:19:34.088 "dma_device_id": "system", 00:19:34.088 "dma_device_type": 1 00:19:34.088 }, 00:19:34.088 { 00:19:34.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.088 "dma_device_type": 2 00:19:34.088 } 00:19:34.088 ], 00:19:34.088 "driver_specific": {} 00:19:34.088 } 00:19:34.088 ] 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.088 11:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.355 11:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.355 "name": "Existed_Raid", 00:19:34.355 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:34.355 "strip_size_kb": 0, 00:19:34.355 "state": "configuring", 00:19:34.355 "raid_level": "raid1", 00:19:34.355 "superblock": true, 00:19:34.355 "num_base_bdevs": 4, 00:19:34.355 
"num_base_bdevs_discovered": 2, 00:19:34.355 "num_base_bdevs_operational": 4, 00:19:34.355 "base_bdevs_list": [ 00:19:34.355 { 00:19:34.355 "name": "BaseBdev1", 00:19:34.355 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:34.355 "is_configured": true, 00:19:34.355 "data_offset": 2048, 00:19:34.355 "data_size": 63488 00:19:34.355 }, 00:19:34.355 { 00:19:34.355 "name": "BaseBdev2", 00:19:34.355 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:34.355 "is_configured": true, 00:19:34.355 "data_offset": 2048, 00:19:34.355 "data_size": 63488 00:19:34.355 }, 00:19:34.355 { 00:19:34.355 "name": "BaseBdev3", 00:19:34.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.355 "is_configured": false, 00:19:34.355 "data_offset": 0, 00:19:34.355 "data_size": 0 00:19:34.355 }, 00:19:34.355 { 00:19:34.355 "name": "BaseBdev4", 00:19:34.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.355 "is_configured": false, 00:19:34.355 "data_offset": 0, 00:19:34.355 "data_size": 0 00:19:34.355 } 00:19:34.355 ] 00:19:34.355 }' 00:19:34.355 11:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.355 11:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.290 11:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:35.549 [2024-07-25 11:30:51.185580] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.549 BaseBdev3 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:35.549 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.807 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:36.065 [ 00:19:36.065 { 00:19:36.065 "name": "BaseBdev3", 00:19:36.065 "aliases": [ 00:19:36.065 "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a" 00:19:36.065 ], 00:19:36.065 "product_name": "Malloc disk", 00:19:36.065 "block_size": 512, 00:19:36.065 "num_blocks": 65536, 00:19:36.065 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:36.065 "assigned_rate_limits": { 00:19:36.065 "rw_ios_per_sec": 0, 00:19:36.065 "rw_mbytes_per_sec": 0, 00:19:36.065 "r_mbytes_per_sec": 0, 00:19:36.065 "w_mbytes_per_sec": 0 00:19:36.065 }, 00:19:36.065 "claimed": true, 00:19:36.065 "claim_type": "exclusive_write", 00:19:36.065 "zoned": false, 00:19:36.065 "supported_io_types": { 00:19:36.065 "read": true, 00:19:36.065 "write": true, 00:19:36.065 "unmap": true, 00:19:36.065 "flush": true, 00:19:36.065 "reset": true, 00:19:36.065 "nvme_admin": false, 00:19:36.065 "nvme_io": false, 
00:19:36.065 "nvme_io_md": false, 00:19:36.065 "write_zeroes": true, 00:19:36.065 "zcopy": true, 00:19:36.065 "get_zone_info": false, 00:19:36.065 "zone_management": false, 00:19:36.065 "zone_append": false, 00:19:36.065 "compare": false, 00:19:36.065 "compare_and_write": false, 00:19:36.065 "abort": true, 00:19:36.065 "seek_hole": false, 00:19:36.065 "seek_data": false, 00:19:36.065 "copy": true, 00:19:36.065 "nvme_iov_md": false 00:19:36.065 }, 00:19:36.065 "memory_domains": [ 00:19:36.065 { 00:19:36.065 "dma_device_id": "system", 00:19:36.065 "dma_device_type": 1 00:19:36.065 }, 00:19:36.065 { 00:19:36.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.065 "dma_device_type": 2 00:19:36.065 } 00:19:36.065 ], 00:19:36.065 "driver_specific": {} 00:19:36.065 } 00:19:36.065 ] 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.065 11:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.324 11:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:36.324 "name": "Existed_Raid", 00:19:36.324 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:36.324 "strip_size_kb": 0, 00:19:36.324 "state": "configuring", 00:19:36.324 "raid_level": "raid1", 00:19:36.324 "superblock": true, 00:19:36.324 "num_base_bdevs": 4, 00:19:36.324 "num_base_bdevs_discovered": 3, 00:19:36.324 "num_base_bdevs_operational": 4, 00:19:36.324 "base_bdevs_list": [ 00:19:36.324 { 00:19:36.324 "name": "BaseBdev1", 00:19:36.324 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:36.324 "is_configured": true, 00:19:36.324 "data_offset": 2048, 00:19:36.324 "data_size": 63488 00:19:36.324 }, 00:19:36.324 { 00:19:36.324 "name": "BaseBdev2", 00:19:36.324 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:36.324 "is_configured": true, 00:19:36.324 "data_offset": 2048, 00:19:36.324 
"data_size": 63488 00:19:36.324 }, 00:19:36.324 { 00:19:36.324 "name": "BaseBdev3", 00:19:36.324 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:36.324 "is_configured": true, 00:19:36.324 "data_offset": 2048, 00:19:36.324 "data_size": 63488 00:19:36.324 }, 00:19:36.324 { 00:19:36.324 "name": "BaseBdev4", 00:19:36.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.324 "is_configured": false, 00:19:36.324 "data_offset": 0, 00:19:36.324 "data_size": 0 00:19:36.324 } 00:19:36.324 ] 00:19:36.324 }' 00:19:36.324 11:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:36.324 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.890 11:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:37.149 [2024-07-25 11:30:52.927784] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:37.149 [2024-07-25 11:30:52.928147] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:37.149 [2024-07-25 11:30:52.928174] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.149 [2024-07-25 11:30:52.928507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:37.149 [2024-07-25 11:30:52.928747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:37.149 [2024-07-25 11:30:52.928773] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:37.149 BaseBdev4 00:19:37.149 [2024-07-25 11:30:52.928951] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:37.149 11:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:37.407 11:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:37.665 [ 00:19:37.665 { 00:19:37.665 "name": "BaseBdev4", 00:19:37.665 "aliases": [ 00:19:37.665 "5a0572e2-a081-46de-8b9a-dd16b36c6795" 00:19:37.665 ], 00:19:37.665 "product_name": "Malloc disk", 00:19:37.665 "block_size": 512, 00:19:37.665 "num_blocks": 65536, 00:19:37.665 "uuid": "5a0572e2-a081-46de-8b9a-dd16b36c6795", 00:19:37.665 "assigned_rate_limits": { 00:19:37.665 "rw_ios_per_sec": 0, 00:19:37.665 "rw_mbytes_per_sec": 0, 00:19:37.665 "r_mbytes_per_sec": 0, 00:19:37.665 "w_mbytes_per_sec": 0 00:19:37.665 }, 00:19:37.665 "claimed": true, 00:19:37.665 "claim_type": "exclusive_write", 00:19:37.665 
"zoned": false, 00:19:37.665 "supported_io_types": { 00:19:37.665 "read": true, 00:19:37.665 "write": true, 00:19:37.665 "unmap": true, 00:19:37.665 "flush": true, 00:19:37.665 "reset": true, 00:19:37.665 "nvme_admin": false, 00:19:37.665 "nvme_io": false, 00:19:37.665 "nvme_io_md": false, 00:19:37.665 "write_zeroes": true, 00:19:37.665 "zcopy": true, 00:19:37.665 "get_zone_info": false, 00:19:37.665 "zone_management": false, 00:19:37.665 "zone_append": false, 00:19:37.665 "compare": false, 00:19:37.665 "compare_and_write": false, 00:19:37.665 "abort": true, 00:19:37.665 "seek_hole": false, 00:19:37.665 "seek_data": false, 00:19:37.665 "copy": true, 00:19:37.665 "nvme_iov_md": false 00:19:37.665 }, 00:19:37.665 "memory_domains": [ 00:19:37.665 { 00:19:37.665 "dma_device_id": "system", 00:19:37.665 "dma_device_type": 1 00:19:37.665 }, 00:19:37.665 { 00:19:37.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.665 "dma_device_type": 2 00:19:37.665 } 00:19:37.665 ], 00:19:37.665 "driver_specific": {} 00:19:37.665 } 00:19:37.665 ] 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.665 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.924 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.924 "name": "Existed_Raid", 00:19:37.924 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:37.924 "strip_size_kb": 0, 00:19:37.924 "state": "online", 00:19:37.924 "raid_level": "raid1", 00:19:37.924 "superblock": true, 00:19:37.924 "num_base_bdevs": 4, 00:19:37.924 "num_base_bdevs_discovered": 4, 00:19:37.924 "num_base_bdevs_operational": 4, 00:19:37.924 "base_bdevs_list": [ 00:19:37.924 { 00:19:37.924 "name": "BaseBdev1", 00:19:37.924 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:37.924 "is_configured": true, 00:19:37.924 "data_offset": 2048, 
00:19:37.924 "data_size": 63488 00:19:37.924 }, 00:19:37.924 { 00:19:37.924 "name": "BaseBdev2", 00:19:37.924 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:37.924 "is_configured": true, 00:19:37.924 "data_offset": 2048, 00:19:37.924 "data_size": 63488 00:19:37.924 }, 00:19:37.924 { 00:19:37.924 "name": "BaseBdev3", 00:19:37.924 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:37.924 "is_configured": true, 00:19:37.924 "data_offset": 2048, 00:19:37.924 "data_size": 63488 00:19:37.924 }, 00:19:37.924 { 00:19:37.924 "name": "BaseBdev4", 00:19:37.924 "uuid": "5a0572e2-a081-46de-8b9a-dd16b36c6795", 00:19:37.924 "is_configured": true, 00:19:37.924 "data_offset": 2048, 00:19:37.924 "data_size": 63488 00:19:37.924 } 00:19:37.924 ] 00:19:37.924 }' 00:19:37.924 11:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.924 11:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:38.490 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:38.748 [2024-07-25 11:30:54.584744] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.748 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:38.748 "name": "Existed_Raid", 00:19:38.748 "aliases": [ 00:19:38.748 "67b76004-57c2-4280-b731-2f51385adc74" 00:19:38.748 ], 00:19:38.748 "product_name": "Raid Volume", 00:19:38.748 "block_size": 512, 00:19:38.748 "num_blocks": 63488, 00:19:38.748 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:38.748 "assigned_rate_limits": { 00:19:38.748 "rw_ios_per_sec": 0, 00:19:38.748 "rw_mbytes_per_sec": 0, 00:19:38.748 "r_mbytes_per_sec": 0, 00:19:38.748 "w_mbytes_per_sec": 0 00:19:38.749 }, 00:19:38.749 "claimed": false, 00:19:38.749 "zoned": false, 00:19:38.749 "supported_io_types": { 00:19:38.749 "read": true, 00:19:38.749 "write": true, 00:19:38.749 "unmap": false, 00:19:38.749 "flush": false, 00:19:38.749 "reset": true, 00:19:38.749 "nvme_admin": false, 00:19:38.749 "nvme_io": false, 00:19:38.749 "nvme_io_md": false, 00:19:38.749 "write_zeroes": true, 00:19:38.749 "zcopy": false, 00:19:38.749 "get_zone_info": false, 00:19:38.749 "zone_management": false, 00:19:38.749 "zone_append": false, 00:19:38.749 "compare": false, 00:19:38.749 "compare_and_write": false, 00:19:38.749 "abort": false, 00:19:38.749 "seek_hole": false, 00:19:38.749 "seek_data": false, 00:19:38.749 "copy": false, 00:19:38.749 "nvme_iov_md": false 00:19:38.749 }, 00:19:38.749 "memory_domains": [ 00:19:38.749 { 00:19:38.749 "dma_device_id": "system", 00:19:38.749 
"dma_device_type": 1 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.749 "dma_device_type": 2 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "system", 00:19:38.749 "dma_device_type": 1 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.749 "dma_device_type": 2 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "system", 00:19:38.749 "dma_device_type": 1 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.749 "dma_device_type": 2 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "system", 00:19:38.749 "dma_device_type": 1 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.749 "dma_device_type": 2 00:19:38.749 } 00:19:38.749 ], 00:19:38.749 "driver_specific": { 00:19:38.749 "raid": { 00:19:38.749 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:38.749 "strip_size_kb": 0, 00:19:38.749 "state": "online", 00:19:38.749 "raid_level": "raid1", 00:19:38.749 "superblock": true, 00:19:38.749 "num_base_bdevs": 4, 00:19:38.749 "num_base_bdevs_discovered": 4, 00:19:38.749 "num_base_bdevs_operational": 4, 00:19:38.749 "base_bdevs_list": [ 00:19:38.749 { 00:19:38.749 "name": "BaseBdev1", 00:19:38.749 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:38.749 "is_configured": true, 00:19:38.749 "data_offset": 2048, 00:19:38.749 "data_size": 63488 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "name": "BaseBdev2", 00:19:38.749 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:38.749 "is_configured": true, 00:19:38.749 "data_offset": 2048, 00:19:38.749 "data_size": 63488 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "name": "BaseBdev3", 00:19:38.749 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:38.749 "is_configured": true, 00:19:38.749 "data_offset": 2048, 00:19:38.749 "data_size": 63488 00:19:38.749 }, 00:19:38.749 { 00:19:38.749 "name": "BaseBdev4", 00:19:38.749 "uuid": "5a0572e2-a081-46de-8b9a-dd16b36c6795", 00:19:38.749 "is_configured": true, 00:19:38.749 "data_offset": 2048, 00:19:38.749 "data_size": 63488 00:19:38.749 } 00:19:38.749 ] 00:19:38.749 } 00:19:38.749 } 00:19:38.749 }' 00:19:38.749 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.008 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:39.008 BaseBdev2 00:19:39.008 BaseBdev3 00:19:39.008 BaseBdev4' 00:19:39.008 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.008 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:39.008 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.266 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.266 "name": "BaseBdev1", 00:19:39.266 "aliases": [ 00:19:39.266 "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3" 00:19:39.266 ], 00:19:39.266 "product_name": "Malloc disk", 00:19:39.266 "block_size": 512, 00:19:39.266 "num_blocks": 65536, 00:19:39.266 "uuid": "8dd6a7fb-c7cf-41d5-97d5-12b3fbb5b7a3", 00:19:39.266 "assigned_rate_limits": { 00:19:39.266 "rw_ios_per_sec": 0, 00:19:39.266 "rw_mbytes_per_sec": 0, 00:19:39.266 "r_mbytes_per_sec": 0, 
00:19:39.266 "w_mbytes_per_sec": 0 00:19:39.266 }, 00:19:39.266 "claimed": true, 00:19:39.266 "claim_type": "exclusive_write", 00:19:39.266 "zoned": false, 00:19:39.266 "supported_io_types": { 00:19:39.266 "read": true, 00:19:39.266 "write": true, 00:19:39.266 "unmap": true, 00:19:39.266 "flush": true, 00:19:39.266 "reset": true, 00:19:39.266 "nvme_admin": false, 00:19:39.266 "nvme_io": false, 00:19:39.266 "nvme_io_md": false, 00:19:39.266 "write_zeroes": true, 00:19:39.266 "zcopy": true, 00:19:39.266 "get_zone_info": false, 00:19:39.266 "zone_management": false, 00:19:39.267 "zone_append": false, 00:19:39.267 "compare": false, 00:19:39.267 "compare_and_write": false, 00:19:39.267 "abort": true, 00:19:39.267 "seek_hole": false, 00:19:39.267 "seek_data": false, 00:19:39.267 "copy": true, 00:19:39.267 "nvme_iov_md": false 00:19:39.267 }, 00:19:39.267 "memory_domains": [ 00:19:39.267 { 00:19:39.267 "dma_device_id": "system", 00:19:39.267 "dma_device_type": 1 00:19:39.267 }, 00:19:39.267 { 00:19:39.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.267 "dma_device_type": 2 00:19:39.267 } 00:19:39.267 ], 00:19:39.267 "driver_specific": {} 00:19:39.267 }' 00:19:39.267 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.267 11:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.267 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:39.267 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.267 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:39.525 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.783 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.783 "name": "BaseBdev2", 00:19:39.783 "aliases": [ 00:19:39.783 "11552a52-7202-442c-a17a-93cd801b1c5f" 00:19:39.783 ], 00:19:39.783 "product_name": "Malloc disk", 00:19:39.783 "block_size": 512, 00:19:39.783 "num_blocks": 65536, 00:19:39.783 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:39.783 "assigned_rate_limits": { 00:19:39.783 "rw_ios_per_sec": 0, 00:19:39.783 "rw_mbytes_per_sec": 0, 00:19:39.783 "r_mbytes_per_sec": 0, 00:19:39.783 "w_mbytes_per_sec": 0 00:19:39.783 }, 00:19:39.783 "claimed": true, 00:19:39.783 "claim_type": "exclusive_write", 00:19:39.783 "zoned": 
false, 00:19:39.783 "supported_io_types": { 00:19:39.783 "read": true, 00:19:39.783 "write": true, 00:19:39.783 "unmap": true, 00:19:39.783 "flush": true, 00:19:39.783 "reset": true, 00:19:39.783 "nvme_admin": false, 00:19:39.783 "nvme_io": false, 00:19:39.783 "nvme_io_md": false, 00:19:39.783 "write_zeroes": true, 00:19:39.783 "zcopy": true, 00:19:39.783 "get_zone_info": false, 00:19:39.783 "zone_management": false, 00:19:39.783 "zone_append": false, 00:19:39.783 "compare": false, 00:19:39.783 "compare_and_write": false, 00:19:39.783 "abort": true, 00:19:39.783 "seek_hole": false, 00:19:39.783 "seek_data": false, 00:19:39.783 "copy": true, 00:19:39.783 "nvme_iov_md": false 00:19:39.783 }, 00:19:39.783 "memory_domains": [ 00:19:39.783 { 00:19:39.783 "dma_device_id": "system", 00:19:39.783 "dma_device_type": 1 00:19:39.783 }, 00:19:39.783 { 00:19:39.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.783 "dma_device_type": 2 00:19:39.783 } 00:19:39.783 ], 00:19:39.783 "driver_specific": {} 00:19:39.783 }' 00:19:39.783 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.041 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.042 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.299 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.299 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.299 11:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.299 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.300 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.300 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.300 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:40.558 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.558 "name": "BaseBdev3", 00:19:40.558 "aliases": [ 00:19:40.558 "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a" 00:19:40.558 ], 00:19:40.558 "product_name": "Malloc disk", 00:19:40.558 "block_size": 512, 00:19:40.559 "num_blocks": 65536, 00:19:40.559 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:40.559 "assigned_rate_limits": { 00:19:40.559 "rw_ios_per_sec": 0, 00:19:40.559 "rw_mbytes_per_sec": 0, 00:19:40.559 "r_mbytes_per_sec": 0, 00:19:40.559 "w_mbytes_per_sec": 0 00:19:40.559 }, 00:19:40.559 "claimed": true, 00:19:40.559 "claim_type": "exclusive_write", 00:19:40.559 "zoned": false, 00:19:40.559 "supported_io_types": { 00:19:40.559 "read": true, 00:19:40.559 "write": true, 00:19:40.559 "unmap": true, 00:19:40.559 "flush": 
true, 00:19:40.559 "reset": true, 00:19:40.559 "nvme_admin": false, 00:19:40.559 "nvme_io": false, 00:19:40.559 "nvme_io_md": false, 00:19:40.559 "write_zeroes": true, 00:19:40.559 "zcopy": true, 00:19:40.559 "get_zone_info": false, 00:19:40.559 "zone_management": false, 00:19:40.559 "zone_append": false, 00:19:40.559 "compare": false, 00:19:40.559 "compare_and_write": false, 00:19:40.559 "abort": true, 00:19:40.559 "seek_hole": false, 00:19:40.559 "seek_data": false, 00:19:40.559 "copy": true, 00:19:40.559 "nvme_iov_md": false 00:19:40.559 }, 00:19:40.559 "memory_domains": [ 00:19:40.559 { 00:19:40.559 "dma_device_id": "system", 00:19:40.559 "dma_device_type": 1 00:19:40.559 }, 00:19:40.559 { 00:19:40.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.559 "dma_device_type": 2 00:19:40.559 } 00:19:40.559 ], 00:19:40.559 "driver_specific": {} 00:19:40.559 }' 00:19:40.559 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.559 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.817 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.076 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.076 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.076 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:41.076 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:41.076 11:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:41.334 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.334 "name": "BaseBdev4", 00:19:41.334 "aliases": [ 00:19:41.334 "5a0572e2-a081-46de-8b9a-dd16b36c6795" 00:19:41.334 ], 00:19:41.334 "product_name": "Malloc disk", 00:19:41.334 "block_size": 512, 00:19:41.334 "num_blocks": 65536, 00:19:41.334 "uuid": "5a0572e2-a081-46de-8b9a-dd16b36c6795", 00:19:41.334 "assigned_rate_limits": { 00:19:41.334 "rw_ios_per_sec": 0, 00:19:41.334 "rw_mbytes_per_sec": 0, 00:19:41.334 "r_mbytes_per_sec": 0, 00:19:41.334 "w_mbytes_per_sec": 0 00:19:41.334 }, 00:19:41.334 "claimed": true, 00:19:41.335 "claim_type": "exclusive_write", 00:19:41.335 "zoned": false, 00:19:41.335 "supported_io_types": { 00:19:41.335 "read": true, 00:19:41.335 "write": true, 00:19:41.335 "unmap": true, 00:19:41.335 "flush": true, 00:19:41.335 "reset": true, 00:19:41.335 "nvme_admin": false, 00:19:41.335 "nvme_io": false, 00:19:41.335 "nvme_io_md": false, 00:19:41.335 
"write_zeroes": true, 00:19:41.335 "zcopy": true, 00:19:41.335 "get_zone_info": false, 00:19:41.335 "zone_management": false, 00:19:41.335 "zone_append": false, 00:19:41.335 "compare": false, 00:19:41.335 "compare_and_write": false, 00:19:41.335 "abort": true, 00:19:41.335 "seek_hole": false, 00:19:41.335 "seek_data": false, 00:19:41.335 "copy": true, 00:19:41.335 "nvme_iov_md": false 00:19:41.335 }, 00:19:41.335 "memory_domains": [ 00:19:41.335 { 00:19:41.335 "dma_device_id": "system", 00:19:41.335 "dma_device_type": 1 00:19:41.335 }, 00:19:41.335 { 00:19:41.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.335 "dma_device_type": 2 00:19:41.335 } 00:19:41.335 ], 00:19:41.335 "driver_specific": {} 00:19:41.335 }' 00:19:41.335 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.335 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.335 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:41.335 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.335 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.593 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:41.851 [2024-07-25 11:30:57.697548] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.112 11:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.369 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.369 "name": "Existed_Raid", 00:19:42.369 "uuid": "67b76004-57c2-4280-b731-2f51385adc74", 00:19:42.369 "strip_size_kb": 0, 00:19:42.369 "state": "online", 00:19:42.369 "raid_level": "raid1", 00:19:42.369 "superblock": true, 00:19:42.369 "num_base_bdevs": 4, 00:19:42.369 "num_base_bdevs_discovered": 3, 00:19:42.369 "num_base_bdevs_operational": 3, 00:19:42.369 "base_bdevs_list": [ 00:19:42.369 { 00:19:42.369 "name": null, 00:19:42.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.369 "is_configured": false, 00:19:42.369 "data_offset": 2048, 00:19:42.369 "data_size": 63488 00:19:42.369 }, 00:19:42.369 { 00:19:42.369 "name": "BaseBdev2", 00:19:42.369 "uuid": "11552a52-7202-442c-a17a-93cd801b1c5f", 00:19:42.369 "is_configured": true, 00:19:42.369 "data_offset": 2048, 00:19:42.369 "data_size": 63488 00:19:42.369 }, 00:19:42.369 { 00:19:42.369 "name": "BaseBdev3", 00:19:42.369 "uuid": "a9d583b3-d6b9-46fc-92ee-3a12bc09fa4a", 00:19:42.369 "is_configured": true, 00:19:42.369 "data_offset": 2048, 00:19:42.369 "data_size": 63488 00:19:42.369 }, 00:19:42.369 { 00:19:42.369 "name": "BaseBdev4", 00:19:42.369 "uuid": "5a0572e2-a081-46de-8b9a-dd16b36c6795", 00:19:42.369 "is_configured": true, 00:19:42.369 "data_offset": 2048, 00:19:42.369 "data_size": 63488 00:19:42.369 } 00:19:42.369 ] 00:19:42.369 }' 00:19:42.369 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.369 11:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.950 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:42.950 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:42.950 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:42.950 11:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.207 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:43.207 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:43.207 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:43.465 [2024-07-25 11:30:59.293858] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:43.723 11:30:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:43.723 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:43.723 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.723 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:43.980 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:43.980 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:43.980 11:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:44.238 [2024-07-25 11:30:59.892554] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:44.238 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:44.238 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:44.238 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:44.238 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.497 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:44.497 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:44.497 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:44.756 [2024-07-25 11:31:00.576201] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:44.756 [2024-07-25 11:31:00.576393] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:45.014 [2024-07-25 11:31:00.667611] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.014 [2024-07-25 11:31:00.667762] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:45.014 [2024-07-25 11:31:00.667781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:45.014 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:45.014 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:45.014 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.014 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:45.272 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:45.272 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:45.272 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:45.272 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:45.272 
11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:45.272 11:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.531 BaseBdev2 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:45.531 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.789 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:46.048 [ 00:19:46.048 { 00:19:46.048 "name": "BaseBdev2", 00:19:46.048 "aliases": [ 00:19:46.048 "cce860bf-70ab-4d91-9ebf-4989ae0663d7" 00:19:46.048 ], 00:19:46.048 "product_name": "Malloc disk", 00:19:46.048 "block_size": 512, 00:19:46.048 "num_blocks": 65536, 00:19:46.048 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:46.048 "assigned_rate_limits": { 00:19:46.048 "rw_ios_per_sec": 0, 00:19:46.048 "rw_mbytes_per_sec": 0, 00:19:46.048 "r_mbytes_per_sec": 0, 00:19:46.048 "w_mbytes_per_sec": 0 00:19:46.048 }, 00:19:46.048 "claimed": false, 00:19:46.048 "zoned": false, 00:19:46.048 "supported_io_types": { 00:19:46.048 "read": true, 00:19:46.048 "write": true, 00:19:46.048 "unmap": true, 00:19:46.048 "flush": true, 00:19:46.048 "reset": true, 00:19:46.048 "nvme_admin": false, 00:19:46.048 "nvme_io": false, 00:19:46.048 "nvme_io_md": false, 00:19:46.048 "write_zeroes": true, 00:19:46.048 "zcopy": true, 00:19:46.048 "get_zone_info": false, 00:19:46.048 "zone_management": false, 00:19:46.048 "zone_append": false, 00:19:46.048 "compare": false, 00:19:46.048 "compare_and_write": false, 00:19:46.048 "abort": true, 00:19:46.048 "seek_hole": false, 00:19:46.048 "seek_data": false, 00:19:46.048 "copy": true, 00:19:46.048 "nvme_iov_md": false 00:19:46.048 }, 00:19:46.048 "memory_domains": [ 00:19:46.048 { 00:19:46.048 "dma_device_id": "system", 00:19:46.048 "dma_device_type": 1 00:19:46.048 }, 00:19:46.048 { 00:19:46.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.048 "dma_device_type": 2 00:19:46.048 } 00:19:46.048 ], 00:19:46.048 "driver_specific": {} 00:19:46.048 } 00:19:46.048 ] 00:19:46.048 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:46.048 11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:46.048 11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:46.048 11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
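The trace here rebuilds the base devices one at a time: each is a 32 MiB malloc bdev with a 512-byte block size, created over the test's private RPC socket and then waited for before the loop moves on. A minimal bash sketch of that create-and-wait sequence, reconstructed from the commands visible in the trace (it is not the actual bdev_raid.sh source; the rpc.py path and socket are simply the ones this run uses):

    #!/usr/bin/env bash
    set -e
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    for name in BaseBdev2 BaseBdev3 BaseBdev4; do
        # 32 MiB malloc bdev with 512-byte blocks, as in the trace above
        rpc bdev_malloc_create 32 512 -b "$name"
        rpc bdev_wait_for_examine
        # waitforbdev in the trace polls bdev_get_bdevs with a 2000 ms timeout
        rpc bdev_get_bdevs -b "$name" -t 2000 > /dev/null
    done
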
00:19:46.307 BaseBdev3 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:46.307 11:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.565 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:46.824 [ 00:19:46.824 { 00:19:46.824 "name": "BaseBdev3", 00:19:46.824 "aliases": [ 00:19:46.824 "85436d92-4117-4aaa-8abc-1ad877aa0f7f" 00:19:46.824 ], 00:19:46.824 "product_name": "Malloc disk", 00:19:46.824 "block_size": 512, 00:19:46.824 "num_blocks": 65536, 00:19:46.824 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:46.824 "assigned_rate_limits": { 00:19:46.824 "rw_ios_per_sec": 0, 00:19:46.824 "rw_mbytes_per_sec": 0, 00:19:46.824 "r_mbytes_per_sec": 0, 00:19:46.824 "w_mbytes_per_sec": 0 00:19:46.824 }, 00:19:46.824 "claimed": false, 00:19:46.824 "zoned": false, 00:19:46.824 "supported_io_types": { 00:19:46.824 "read": true, 00:19:46.824 "write": true, 00:19:46.824 "unmap": true, 00:19:46.824 "flush": true, 00:19:46.824 "reset": true, 00:19:46.824 "nvme_admin": false, 00:19:46.824 "nvme_io": false, 00:19:46.824 "nvme_io_md": false, 00:19:46.824 "write_zeroes": true, 00:19:46.824 "zcopy": true, 00:19:46.824 "get_zone_info": false, 00:19:46.824 "zone_management": false, 00:19:46.824 "zone_append": false, 00:19:46.824 "compare": false, 00:19:46.824 "compare_and_write": false, 00:19:46.824 "abort": true, 00:19:46.824 "seek_hole": false, 00:19:46.824 "seek_data": false, 00:19:46.824 "copy": true, 00:19:46.824 "nvme_iov_md": false 00:19:46.824 }, 00:19:46.824 "memory_domains": [ 00:19:46.824 { 00:19:46.824 "dma_device_id": "system", 00:19:46.824 "dma_device_type": 1 00:19:46.824 }, 00:19:46.824 { 00:19:46.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.824 "dma_device_type": 2 00:19:46.824 } 00:19:46.824 ], 00:19:46.824 "driver_specific": {} 00:19:46.824 } 00:19:46.824 ] 00:19:46.824 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:46.824 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:46.824 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:46.824 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:47.083 BaseBdev4 00:19:47.083 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:47.084 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:47.084 11:31:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:47.084 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:47.084 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:47.084 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:47.084 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.342 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:47.601 [ 00:19:47.601 { 00:19:47.601 "name": "BaseBdev4", 00:19:47.601 "aliases": [ 00:19:47.601 "93fc3d62-9a74-47af-8f67-1ac3331b59ce" 00:19:47.601 ], 00:19:47.601 "product_name": "Malloc disk", 00:19:47.601 "block_size": 512, 00:19:47.601 "num_blocks": 65536, 00:19:47.601 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:47.601 "assigned_rate_limits": { 00:19:47.601 "rw_ios_per_sec": 0, 00:19:47.601 "rw_mbytes_per_sec": 0, 00:19:47.601 "r_mbytes_per_sec": 0, 00:19:47.601 "w_mbytes_per_sec": 0 00:19:47.601 }, 00:19:47.601 "claimed": false, 00:19:47.601 "zoned": false, 00:19:47.601 "supported_io_types": { 00:19:47.601 "read": true, 00:19:47.601 "write": true, 00:19:47.601 "unmap": true, 00:19:47.601 "flush": true, 00:19:47.601 "reset": true, 00:19:47.601 "nvme_admin": false, 00:19:47.601 "nvme_io": false, 00:19:47.601 "nvme_io_md": false, 00:19:47.601 "write_zeroes": true, 00:19:47.601 "zcopy": true, 00:19:47.601 "get_zone_info": false, 00:19:47.601 "zone_management": false, 00:19:47.601 "zone_append": false, 00:19:47.601 "compare": false, 00:19:47.601 "compare_and_write": false, 00:19:47.601 "abort": true, 00:19:47.601 "seek_hole": false, 00:19:47.601 "seek_data": false, 00:19:47.601 "copy": true, 00:19:47.601 "nvme_iov_md": false 00:19:47.601 }, 00:19:47.601 "memory_domains": [ 00:19:47.601 { 00:19:47.601 "dma_device_id": "system", 00:19:47.601 "dma_device_type": 1 00:19:47.601 }, 00:19:47.601 { 00:19:47.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.601 "dma_device_type": 2 00:19:47.601 } 00:19:47.601 ], 00:19:47.601 "driver_specific": {} 00:19:47.601 } 00:19:47.601 ] 00:19:47.601 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:47.601 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:47.601 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:47.601 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:47.860 [2024-07-25 11:31:03.599559] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.860 [2024-07-25 11:31:03.599665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.860 [2024-07-25 11:31:03.599725] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.860 [2024-07-25 11:31:03.602314] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.860 [2024-07-25 
11:31:03.602399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.860 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.119 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.119 "name": "Existed_Raid", 00:19:48.119 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:48.119 "strip_size_kb": 0, 00:19:48.119 "state": "configuring", 00:19:48.119 "raid_level": "raid1", 00:19:48.119 "superblock": true, 00:19:48.119 "num_base_bdevs": 4, 00:19:48.119 "num_base_bdevs_discovered": 3, 00:19:48.119 "num_base_bdevs_operational": 4, 00:19:48.119 "base_bdevs_list": [ 00:19:48.119 { 00:19:48.119 "name": "BaseBdev1", 00:19:48.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.119 "is_configured": false, 00:19:48.119 "data_offset": 0, 00:19:48.119 "data_size": 0 00:19:48.119 }, 00:19:48.119 { 00:19:48.119 "name": "BaseBdev2", 00:19:48.119 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:48.119 "is_configured": true, 00:19:48.119 "data_offset": 2048, 00:19:48.119 "data_size": 63488 00:19:48.119 }, 00:19:48.119 { 00:19:48.119 "name": "BaseBdev3", 00:19:48.119 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:48.119 "is_configured": true, 00:19:48.119 "data_offset": 2048, 00:19:48.119 "data_size": 63488 00:19:48.119 }, 00:19:48.119 { 00:19:48.119 "name": "BaseBdev4", 00:19:48.119 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:48.119 "is_configured": true, 00:19:48.119 "data_offset": 2048, 00:19:48.119 "data_size": 63488 00:19:48.119 } 00:19:48.119 ] 00:19:48.119 }' 00:19:48.119 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.119 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:48.943 [2024-07-25 11:31:04.743800] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.943 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.201 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.201 "name": "Existed_Raid", 00:19:49.201 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:49.201 "strip_size_kb": 0, 00:19:49.201 "state": "configuring", 00:19:49.201 "raid_level": "raid1", 00:19:49.201 "superblock": true, 00:19:49.201 "num_base_bdevs": 4, 00:19:49.201 "num_base_bdevs_discovered": 2, 00:19:49.201 "num_base_bdevs_operational": 4, 00:19:49.201 "base_bdevs_list": [ 00:19:49.201 { 00:19:49.201 "name": "BaseBdev1", 00:19:49.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.201 "is_configured": false, 00:19:49.201 "data_offset": 0, 00:19:49.201 "data_size": 0 00:19:49.201 }, 00:19:49.201 { 00:19:49.201 "name": null, 00:19:49.201 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:49.201 "is_configured": false, 00:19:49.201 "data_offset": 2048, 00:19:49.201 "data_size": 63488 00:19:49.201 }, 00:19:49.201 { 00:19:49.201 "name": "BaseBdev3", 00:19:49.201 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:49.201 "is_configured": true, 00:19:49.201 "data_offset": 2048, 00:19:49.201 "data_size": 63488 00:19:49.201 }, 00:19:49.201 { 00:19:49.201 "name": "BaseBdev4", 00:19:49.201 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:49.201 "is_configured": true, 00:19:49.201 "data_offset": 2048, 00:19:49.201 "data_size": 63488 00:19:49.201 } 00:19:49.201 ] 00:19:49.201 }' 00:19:49.201 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.201 11:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.767 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.767 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:50.062 11:31:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:50.062 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:50.320 [2024-07-25 11:31:06.176276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:50.320 BaseBdev1 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:50.320 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.885 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:50.885 [ 00:19:50.885 { 00:19:50.885 "name": "BaseBdev1", 00:19:50.885 "aliases": [ 00:19:50.885 "cd7d0c8d-cad0-4a00-a187-ad377628b720" 00:19:50.885 ], 00:19:50.885 "product_name": "Malloc disk", 00:19:50.885 "block_size": 512, 00:19:50.885 "num_blocks": 65536, 00:19:50.885 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:50.885 "assigned_rate_limits": { 00:19:50.885 "rw_ios_per_sec": 0, 00:19:50.885 "rw_mbytes_per_sec": 0, 00:19:50.885 "r_mbytes_per_sec": 0, 00:19:50.885 "w_mbytes_per_sec": 0 00:19:50.885 }, 00:19:50.885 "claimed": true, 00:19:50.885 "claim_type": "exclusive_write", 00:19:50.885 "zoned": false, 00:19:50.885 "supported_io_types": { 00:19:50.885 "read": true, 00:19:50.885 "write": true, 00:19:50.885 "unmap": true, 00:19:50.885 "flush": true, 00:19:50.885 "reset": true, 00:19:50.885 "nvme_admin": false, 00:19:50.886 "nvme_io": false, 00:19:50.886 "nvme_io_md": false, 00:19:50.886 "write_zeroes": true, 00:19:50.886 "zcopy": true, 00:19:50.886 "get_zone_info": false, 00:19:50.886 "zone_management": false, 00:19:50.886 "zone_append": false, 00:19:50.886 "compare": false, 00:19:50.886 "compare_and_write": false, 00:19:50.886 "abort": true, 00:19:50.886 "seek_hole": false, 00:19:50.886 "seek_data": false, 00:19:50.886 "copy": true, 00:19:50.886 "nvme_iov_md": false 00:19:50.886 }, 00:19:50.886 "memory_domains": [ 00:19:50.886 { 00:19:50.886 "dma_device_id": "system", 00:19:50.886 "dma_device_type": 1 00:19:50.886 }, 00:19:50.886 { 00:19:50.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.886 "dma_device_type": 2 00:19:50.886 } 00:19:50.886 ], 00:19:50.886 "driver_specific": {} 00:19:50.886 } 00:19:50.886 ] 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:50.886 11:31:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.886 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.144 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.144 "name": "Existed_Raid", 00:19:51.144 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:51.144 "strip_size_kb": 0, 00:19:51.144 "state": "configuring", 00:19:51.144 "raid_level": "raid1", 00:19:51.144 "superblock": true, 00:19:51.144 "num_base_bdevs": 4, 00:19:51.144 "num_base_bdevs_discovered": 3, 00:19:51.144 "num_base_bdevs_operational": 4, 00:19:51.144 "base_bdevs_list": [ 00:19:51.144 { 00:19:51.144 "name": "BaseBdev1", 00:19:51.144 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:51.144 "is_configured": true, 00:19:51.144 "data_offset": 2048, 00:19:51.144 "data_size": 63488 00:19:51.144 }, 00:19:51.144 { 00:19:51.144 "name": null, 00:19:51.144 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:51.144 "is_configured": false, 00:19:51.144 "data_offset": 2048, 00:19:51.144 "data_size": 63488 00:19:51.144 }, 00:19:51.144 { 00:19:51.144 "name": "BaseBdev3", 00:19:51.144 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:51.144 "is_configured": true, 00:19:51.144 "data_offset": 2048, 00:19:51.144 "data_size": 63488 00:19:51.144 }, 00:19:51.144 { 00:19:51.144 "name": "BaseBdev4", 00:19:51.144 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:51.144 "is_configured": true, 00:19:51.144 "data_offset": 2048, 00:19:51.144 "data_size": 63488 00:19:51.144 } 00:19:51.144 ] 00:19:51.144 }' 00:19:51.144 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.144 11:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.077 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.077 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:52.077 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:52.077 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:52.335 [2024-07-25 11:31:08.096950] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.335 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.594 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.594 "name": "Existed_Raid", 00:19:52.594 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:52.594 "strip_size_kb": 0, 00:19:52.594 "state": "configuring", 00:19:52.594 "raid_level": "raid1", 00:19:52.594 "superblock": true, 00:19:52.594 "num_base_bdevs": 4, 00:19:52.594 "num_base_bdevs_discovered": 2, 00:19:52.594 "num_base_bdevs_operational": 4, 00:19:52.594 "base_bdevs_list": [ 00:19:52.594 { 00:19:52.594 "name": "BaseBdev1", 00:19:52.594 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:52.594 "is_configured": true, 00:19:52.594 "data_offset": 2048, 00:19:52.594 "data_size": 63488 00:19:52.594 }, 00:19:52.594 { 00:19:52.594 "name": null, 00:19:52.594 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:52.594 "is_configured": false, 00:19:52.594 "data_offset": 2048, 00:19:52.594 "data_size": 63488 00:19:52.594 }, 00:19:52.594 { 00:19:52.594 "name": null, 00:19:52.594 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:52.594 "is_configured": false, 00:19:52.594 "data_offset": 2048, 00:19:52.594 "data_size": 63488 00:19:52.594 }, 00:19:52.594 { 00:19:52.594 "name": "BaseBdev4", 00:19:52.594 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:52.594 "is_configured": true, 00:19:52.594 "data_offset": 2048, 00:19:52.594 "data_size": 63488 00:19:52.594 } 00:19:52.594 ] 00:19:52.594 }' 00:19:52.594 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.594 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.529 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.529 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:53.529 11:31:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:53.529 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:53.788 [2024-07-25 11:31:09.625401] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.788 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.046 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.046 "name": "Existed_Raid", 00:19:54.046 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:54.046 "strip_size_kb": 0, 00:19:54.046 "state": "configuring", 00:19:54.046 "raid_level": "raid1", 00:19:54.046 "superblock": true, 00:19:54.046 "num_base_bdevs": 4, 00:19:54.046 "num_base_bdevs_discovered": 3, 00:19:54.046 "num_base_bdevs_operational": 4, 00:19:54.046 "base_bdevs_list": [ 00:19:54.046 { 00:19:54.046 "name": "BaseBdev1", 00:19:54.046 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:54.046 "is_configured": true, 00:19:54.046 "data_offset": 2048, 00:19:54.046 "data_size": 63488 00:19:54.046 }, 00:19:54.046 { 00:19:54.046 "name": null, 00:19:54.046 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:54.046 "is_configured": false, 00:19:54.046 "data_offset": 2048, 00:19:54.046 "data_size": 63488 00:19:54.046 }, 00:19:54.046 { 00:19:54.046 "name": "BaseBdev3", 00:19:54.046 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:54.046 "is_configured": true, 00:19:54.046 "data_offset": 2048, 00:19:54.046 "data_size": 63488 00:19:54.046 }, 00:19:54.046 { 00:19:54.046 "name": "BaseBdev4", 00:19:54.046 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:54.046 "is_configured": true, 00:19:54.046 "data_offset": 2048, 00:19:54.046 "data_size": 63488 00:19:54.046 } 00:19:54.046 ] 00:19:54.046 }' 00:19:54.046 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.046 11:31:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.980 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.980 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:54.980 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:54.980 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:55.237 [2024-07-25 11:31:11.045807] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.494 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.753 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.753 "name": "Existed_Raid", 00:19:55.753 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:55.753 "strip_size_kb": 0, 00:19:55.753 "state": "configuring", 00:19:55.753 "raid_level": "raid1", 00:19:55.753 "superblock": true, 00:19:55.753 "num_base_bdevs": 4, 00:19:55.753 "num_base_bdevs_discovered": 2, 00:19:55.753 "num_base_bdevs_operational": 4, 00:19:55.753 "base_bdevs_list": [ 00:19:55.753 { 00:19:55.753 "name": null, 00:19:55.753 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:55.753 "is_configured": false, 00:19:55.753 "data_offset": 2048, 00:19:55.753 "data_size": 63488 00:19:55.753 }, 00:19:55.753 { 00:19:55.753 "name": null, 00:19:55.753 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:55.753 "is_configured": false, 00:19:55.753 "data_offset": 2048, 00:19:55.753 "data_size": 63488 00:19:55.753 }, 00:19:55.753 { 00:19:55.753 "name": "BaseBdev3", 00:19:55.753 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:55.753 "is_configured": true, 00:19:55.753 "data_offset": 2048, 00:19:55.753 "data_size": 63488 00:19:55.753 }, 00:19:55.753 { 00:19:55.753 "name": "BaseBdev4", 00:19:55.753 "uuid": 
"93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:55.753 "is_configured": true, 00:19:55.753 "data_offset": 2048, 00:19:55.753 "data_size": 63488 00:19:55.753 } 00:19:55.753 ] 00:19:55.753 }' 00:19:55.753 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.753 11:31:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.318 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.318 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:56.577 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:56.577 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:56.860 [2024-07-25 11:31:12.615175] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.860 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.122 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.122 "name": "Existed_Raid", 00:19:57.122 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:57.122 "strip_size_kb": 0, 00:19:57.122 "state": "configuring", 00:19:57.122 "raid_level": "raid1", 00:19:57.122 "superblock": true, 00:19:57.122 "num_base_bdevs": 4, 00:19:57.122 "num_base_bdevs_discovered": 3, 00:19:57.122 "num_base_bdevs_operational": 4, 00:19:57.122 "base_bdevs_list": [ 00:19:57.122 { 00:19:57.122 "name": null, 00:19:57.122 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:57.122 "is_configured": false, 00:19:57.122 "data_offset": 2048, 00:19:57.122 "data_size": 63488 00:19:57.122 }, 00:19:57.122 { 00:19:57.122 "name": "BaseBdev2", 00:19:57.122 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:57.122 "is_configured": true, 
00:19:57.122 "data_offset": 2048, 00:19:57.122 "data_size": 63488 00:19:57.122 }, 00:19:57.122 { 00:19:57.122 "name": "BaseBdev3", 00:19:57.122 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:57.122 "is_configured": true, 00:19:57.122 "data_offset": 2048, 00:19:57.122 "data_size": 63488 00:19:57.122 }, 00:19:57.122 { 00:19:57.122 "name": "BaseBdev4", 00:19:57.122 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:57.122 "is_configured": true, 00:19:57.122 "data_offset": 2048, 00:19:57.122 "data_size": 63488 00:19:57.122 } 00:19:57.122 ] 00:19:57.122 }' 00:19:57.122 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.122 11:31:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.063 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.063 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:58.063 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:58.063 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:58.063 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.321 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cd7d0c8d-cad0-4a00-a187-ad377628b720 00:19:58.580 [2024-07-25 11:31:14.417104] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:58.580 [2024-07-25 11:31:14.417455] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:58.580 [2024-07-25 11:31:14.417473] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.580 [2024-07-25 11:31:14.417850] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:58.580 [2024-07-25 11:31:14.418030] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:58.580 [2024-07-25 11:31:14.418051] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:58.580 [2024-07-25 11:31:14.418233] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.580 NewBaseBdev 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:58.580 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:19:58.837 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:59.096 [ 00:19:59.096 { 00:19:59.096 "name": "NewBaseBdev", 00:19:59.096 "aliases": [ 00:19:59.096 "cd7d0c8d-cad0-4a00-a187-ad377628b720" 00:19:59.096 ], 00:19:59.096 "product_name": "Malloc disk", 00:19:59.096 "block_size": 512, 00:19:59.096 "num_blocks": 65536, 00:19:59.096 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:59.096 "assigned_rate_limits": { 00:19:59.096 "rw_ios_per_sec": 0, 00:19:59.096 "rw_mbytes_per_sec": 0, 00:19:59.096 "r_mbytes_per_sec": 0, 00:19:59.096 "w_mbytes_per_sec": 0 00:19:59.096 }, 00:19:59.096 "claimed": true, 00:19:59.096 "claim_type": "exclusive_write", 00:19:59.096 "zoned": false, 00:19:59.096 "supported_io_types": { 00:19:59.096 "read": true, 00:19:59.096 "write": true, 00:19:59.096 "unmap": true, 00:19:59.096 "flush": true, 00:19:59.096 "reset": true, 00:19:59.096 "nvme_admin": false, 00:19:59.096 "nvme_io": false, 00:19:59.096 "nvme_io_md": false, 00:19:59.096 "write_zeroes": true, 00:19:59.096 "zcopy": true, 00:19:59.096 "get_zone_info": false, 00:19:59.096 "zone_management": false, 00:19:59.096 "zone_append": false, 00:19:59.096 "compare": false, 00:19:59.096 "compare_and_write": false, 00:19:59.096 "abort": true, 00:19:59.096 "seek_hole": false, 00:19:59.096 "seek_data": false, 00:19:59.096 "copy": true, 00:19:59.096 "nvme_iov_md": false 00:19:59.096 }, 00:19:59.096 "memory_domains": [ 00:19:59.096 { 00:19:59.096 "dma_device_id": "system", 00:19:59.096 "dma_device_type": 1 00:19:59.096 }, 00:19:59.096 { 00:19:59.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.096 "dma_device_type": 2 00:19:59.096 } 00:19:59.096 ], 00:19:59.096 "driver_specific": {} 00:19:59.096 } 00:19:59.096 ] 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.096 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.354 11:31:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.354 "name": "Existed_Raid", 00:19:59.354 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:19:59.354 "strip_size_kb": 0, 00:19:59.354 "state": "online", 00:19:59.354 "raid_level": "raid1", 00:19:59.354 "superblock": true, 00:19:59.354 "num_base_bdevs": 4, 00:19:59.354 "num_base_bdevs_discovered": 4, 00:19:59.354 "num_base_bdevs_operational": 4, 00:19:59.354 "base_bdevs_list": [ 00:19:59.354 { 00:19:59.354 "name": "NewBaseBdev", 00:19:59.354 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:19:59.354 "is_configured": true, 00:19:59.354 "data_offset": 2048, 00:19:59.354 "data_size": 63488 00:19:59.354 }, 00:19:59.354 { 00:19:59.354 "name": "BaseBdev2", 00:19:59.354 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:19:59.354 "is_configured": true, 00:19:59.354 "data_offset": 2048, 00:19:59.354 "data_size": 63488 00:19:59.354 }, 00:19:59.354 { 00:19:59.354 "name": "BaseBdev3", 00:19:59.354 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:19:59.354 "is_configured": true, 00:19:59.354 "data_offset": 2048, 00:19:59.354 "data_size": 63488 00:19:59.354 }, 00:19:59.354 { 00:19:59.354 "name": "BaseBdev4", 00:19:59.354 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:19:59.354 "is_configured": true, 00:19:59.354 "data_offset": 2048, 00:19:59.354 "data_size": 63488 00:19:59.354 } 00:19:59.354 ] 00:19:59.354 }' 00:19:59.354 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.354 11:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:00.289 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:00.289 [2024-07-25 11:31:16.082114] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:00.289 "name": "Existed_Raid", 00:20:00.289 "aliases": [ 00:20:00.289 "ae56406c-24c7-40cf-bb65-cb600018cff8" 00:20:00.289 ], 00:20:00.289 "product_name": "Raid Volume", 00:20:00.289 "block_size": 512, 00:20:00.289 "num_blocks": 63488, 00:20:00.289 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:20:00.289 "assigned_rate_limits": { 00:20:00.289 "rw_ios_per_sec": 0, 00:20:00.289 "rw_mbytes_per_sec": 0, 00:20:00.289 "r_mbytes_per_sec": 0, 00:20:00.289 "w_mbytes_per_sec": 0 00:20:00.289 }, 00:20:00.289 "claimed": false, 00:20:00.289 "zoned": false, 00:20:00.289 "supported_io_types": { 00:20:00.289 "read": true, 00:20:00.289 "write": true, 00:20:00.289 "unmap": false, 00:20:00.289 "flush": false, 
00:20:00.289 "reset": true, 00:20:00.289 "nvme_admin": false, 00:20:00.289 "nvme_io": false, 00:20:00.289 "nvme_io_md": false, 00:20:00.289 "write_zeroes": true, 00:20:00.289 "zcopy": false, 00:20:00.289 "get_zone_info": false, 00:20:00.289 "zone_management": false, 00:20:00.289 "zone_append": false, 00:20:00.289 "compare": false, 00:20:00.289 "compare_and_write": false, 00:20:00.289 "abort": false, 00:20:00.289 "seek_hole": false, 00:20:00.289 "seek_data": false, 00:20:00.289 "copy": false, 00:20:00.289 "nvme_iov_md": false 00:20:00.289 }, 00:20:00.289 "memory_domains": [ 00:20:00.289 { 00:20:00.289 "dma_device_id": "system", 00:20:00.289 "dma_device_type": 1 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.289 "dma_device_type": 2 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "system", 00:20:00.289 "dma_device_type": 1 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.289 "dma_device_type": 2 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "system", 00:20:00.289 "dma_device_type": 1 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.289 "dma_device_type": 2 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "system", 00:20:00.289 "dma_device_type": 1 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.289 "dma_device_type": 2 00:20:00.289 } 00:20:00.289 ], 00:20:00.289 "driver_specific": { 00:20:00.289 "raid": { 00:20:00.289 "uuid": "ae56406c-24c7-40cf-bb65-cb600018cff8", 00:20:00.289 "strip_size_kb": 0, 00:20:00.289 "state": "online", 00:20:00.289 "raid_level": "raid1", 00:20:00.289 "superblock": true, 00:20:00.289 "num_base_bdevs": 4, 00:20:00.289 "num_base_bdevs_discovered": 4, 00:20:00.289 "num_base_bdevs_operational": 4, 00:20:00.289 "base_bdevs_list": [ 00:20:00.289 { 00:20:00.289 "name": "NewBaseBdev", 00:20:00.289 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 2048, 00:20:00.289 "data_size": 63488 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "name": "BaseBdev2", 00:20:00.289 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 2048, 00:20:00.289 "data_size": 63488 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "name": "BaseBdev3", 00:20:00.289 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 2048, 00:20:00.289 "data_size": 63488 00:20:00.289 }, 00:20:00.289 { 00:20:00.289 "name": "BaseBdev4", 00:20:00.289 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:20:00.289 "is_configured": true, 00:20:00.289 "data_offset": 2048, 00:20:00.289 "data_size": 63488 00:20:00.289 } 00:20:00.289 ] 00:20:00.289 } 00:20:00.289 } 00:20:00.289 }' 00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:00.289 BaseBdev2 00:20:00.289 BaseBdev3 00:20:00.289 BaseBdev4' 00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 
00:20:00.289 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:00.548 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:00.548 "name": "NewBaseBdev", 00:20:00.548 "aliases": [ 00:20:00.548 "cd7d0c8d-cad0-4a00-a187-ad377628b720" 00:20:00.548 ], 00:20:00.548 "product_name": "Malloc disk", 00:20:00.548 "block_size": 512, 00:20:00.548 "num_blocks": 65536, 00:20:00.548 "uuid": "cd7d0c8d-cad0-4a00-a187-ad377628b720", 00:20:00.548 "assigned_rate_limits": { 00:20:00.548 "rw_ios_per_sec": 0, 00:20:00.548 "rw_mbytes_per_sec": 0, 00:20:00.548 "r_mbytes_per_sec": 0, 00:20:00.548 "w_mbytes_per_sec": 0 00:20:00.548 }, 00:20:00.548 "claimed": true, 00:20:00.548 "claim_type": "exclusive_write", 00:20:00.548 "zoned": false, 00:20:00.548 "supported_io_types": { 00:20:00.548 "read": true, 00:20:00.548 "write": true, 00:20:00.548 "unmap": true, 00:20:00.548 "flush": true, 00:20:00.548 "reset": true, 00:20:00.548 "nvme_admin": false, 00:20:00.548 "nvme_io": false, 00:20:00.548 "nvme_io_md": false, 00:20:00.548 "write_zeroes": true, 00:20:00.548 "zcopy": true, 00:20:00.548 "get_zone_info": false, 00:20:00.548 "zone_management": false, 00:20:00.548 "zone_append": false, 00:20:00.548 "compare": false, 00:20:00.548 "compare_and_write": false, 00:20:00.548 "abort": true, 00:20:00.548 "seek_hole": false, 00:20:00.548 "seek_data": false, 00:20:00.548 "copy": true, 00:20:00.548 "nvme_iov_md": false 00:20:00.548 }, 00:20:00.548 "memory_domains": [ 00:20:00.548 { 00:20:00.548 "dma_device_id": "system", 00:20:00.548 "dma_device_type": 1 00:20:00.548 }, 00:20:00.548 { 00:20:00.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.548 "dma_device_type": 2 00:20:00.548 } 00:20:00.548 ], 00:20:00.548 "driver_specific": {} 00:20:00.548 }' 00:20:00.548 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:00.820 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:01.152 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:01.410 11:31:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:01.410 "name": "BaseBdev2", 00:20:01.410 "aliases": [ 00:20:01.410 "cce860bf-70ab-4d91-9ebf-4989ae0663d7" 00:20:01.410 ], 00:20:01.410 "product_name": "Malloc disk", 00:20:01.410 "block_size": 512, 00:20:01.410 "num_blocks": 65536, 00:20:01.410 "uuid": "cce860bf-70ab-4d91-9ebf-4989ae0663d7", 00:20:01.410 "assigned_rate_limits": { 00:20:01.410 "rw_ios_per_sec": 0, 00:20:01.410 "rw_mbytes_per_sec": 0, 00:20:01.410 "r_mbytes_per_sec": 0, 00:20:01.410 "w_mbytes_per_sec": 0 00:20:01.410 }, 00:20:01.410 "claimed": true, 00:20:01.410 "claim_type": "exclusive_write", 00:20:01.410 "zoned": false, 00:20:01.410 "supported_io_types": { 00:20:01.410 "read": true, 00:20:01.410 "write": true, 00:20:01.410 "unmap": true, 00:20:01.410 "flush": true, 00:20:01.410 "reset": true, 00:20:01.410 "nvme_admin": false, 00:20:01.410 "nvme_io": false, 00:20:01.410 "nvme_io_md": false, 00:20:01.410 "write_zeroes": true, 00:20:01.410 "zcopy": true, 00:20:01.410 "get_zone_info": false, 00:20:01.410 "zone_management": false, 00:20:01.410 "zone_append": false, 00:20:01.410 "compare": false, 00:20:01.410 "compare_and_write": false, 00:20:01.410 "abort": true, 00:20:01.410 "seek_hole": false, 00:20:01.410 "seek_data": false, 00:20:01.410 "copy": true, 00:20:01.410 "nvme_iov_md": false 00:20:01.410 }, 00:20:01.410 "memory_domains": [ 00:20:01.410 { 00:20:01.410 "dma_device_id": "system", 00:20:01.410 "dma_device_type": 1 00:20:01.410 }, 00:20:01.410 { 00:20:01.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.410 "dma_device_type": 2 00:20:01.410 } 00:20:01.410 ], 00:20:01.410 "driver_specific": {} 00:20:01.410 }' 00:20:01.410 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.410 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.410 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:01.410 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.410 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:01.668 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.234 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.234 "name": "BaseBdev3", 00:20:02.234 "aliases": [ 
00:20:02.234 "85436d92-4117-4aaa-8abc-1ad877aa0f7f" 00:20:02.234 ], 00:20:02.234 "product_name": "Malloc disk", 00:20:02.234 "block_size": 512, 00:20:02.234 "num_blocks": 65536, 00:20:02.234 "uuid": "85436d92-4117-4aaa-8abc-1ad877aa0f7f", 00:20:02.234 "assigned_rate_limits": { 00:20:02.234 "rw_ios_per_sec": 0, 00:20:02.234 "rw_mbytes_per_sec": 0, 00:20:02.234 "r_mbytes_per_sec": 0, 00:20:02.234 "w_mbytes_per_sec": 0 00:20:02.234 }, 00:20:02.234 "claimed": true, 00:20:02.234 "claim_type": "exclusive_write", 00:20:02.234 "zoned": false, 00:20:02.234 "supported_io_types": { 00:20:02.234 "read": true, 00:20:02.235 "write": true, 00:20:02.235 "unmap": true, 00:20:02.235 "flush": true, 00:20:02.235 "reset": true, 00:20:02.235 "nvme_admin": false, 00:20:02.235 "nvme_io": false, 00:20:02.235 "nvme_io_md": false, 00:20:02.235 "write_zeroes": true, 00:20:02.235 "zcopy": true, 00:20:02.235 "get_zone_info": false, 00:20:02.235 "zone_management": false, 00:20:02.235 "zone_append": false, 00:20:02.235 "compare": false, 00:20:02.235 "compare_and_write": false, 00:20:02.235 "abort": true, 00:20:02.235 "seek_hole": false, 00:20:02.235 "seek_data": false, 00:20:02.235 "copy": true, 00:20:02.235 "nvme_iov_md": false 00:20:02.235 }, 00:20:02.235 "memory_domains": [ 00:20:02.235 { 00:20:02.235 "dma_device_id": "system", 00:20:02.235 "dma_device_type": 1 00:20:02.235 }, 00:20:02.235 { 00:20:02.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.235 "dma_device_type": 2 00:20:02.235 } 00:20:02.235 ], 00:20:02.235 "driver_specific": {} 00:20:02.235 }' 00:20:02.235 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.235 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.235 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.235 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.235 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.235 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.235 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.235 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:02.494 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.752 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.752 "name": "BaseBdev4", 00:20:02.752 "aliases": [ 00:20:02.752 "93fc3d62-9a74-47af-8f67-1ac3331b59ce" 00:20:02.752 ], 00:20:02.752 "product_name": "Malloc disk", 00:20:02.752 "block_size": 512, 
00:20:02.752 "num_blocks": 65536, 00:20:02.752 "uuid": "93fc3d62-9a74-47af-8f67-1ac3331b59ce", 00:20:02.752 "assigned_rate_limits": { 00:20:02.752 "rw_ios_per_sec": 0, 00:20:02.752 "rw_mbytes_per_sec": 0, 00:20:02.752 "r_mbytes_per_sec": 0, 00:20:02.752 "w_mbytes_per_sec": 0 00:20:02.752 }, 00:20:02.752 "claimed": true, 00:20:02.752 "claim_type": "exclusive_write", 00:20:02.752 "zoned": false, 00:20:02.752 "supported_io_types": { 00:20:02.752 "read": true, 00:20:02.752 "write": true, 00:20:02.752 "unmap": true, 00:20:02.752 "flush": true, 00:20:02.752 "reset": true, 00:20:02.752 "nvme_admin": false, 00:20:02.752 "nvme_io": false, 00:20:02.752 "nvme_io_md": false, 00:20:02.752 "write_zeroes": true, 00:20:02.752 "zcopy": true, 00:20:02.752 "get_zone_info": false, 00:20:02.752 "zone_management": false, 00:20:02.752 "zone_append": false, 00:20:02.752 "compare": false, 00:20:02.752 "compare_and_write": false, 00:20:02.752 "abort": true, 00:20:02.752 "seek_hole": false, 00:20:02.752 "seek_data": false, 00:20:02.752 "copy": true, 00:20:02.752 "nvme_iov_md": false 00:20:02.752 }, 00:20:02.752 "memory_domains": [ 00:20:02.752 { 00:20:02.752 "dma_device_id": "system", 00:20:02.752 "dma_device_type": 1 00:20:02.752 }, 00:20:02.752 { 00:20:02.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.752 "dma_device_type": 2 00:20:02.752 } 00:20:02.752 ], 00:20:02.752 "driver_specific": {} 00:20:02.752 }' 00:20:02.752 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.752 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.010 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.268 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.268 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.268 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:03.526 [2024-07-25 11:31:19.174643] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:03.526 [2024-07-25 11:31:19.174882] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.526 [2024-07-25 11:31:19.175123] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.526 [2024-07-25 11:31:19.175508] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.526 [2024-07-25 11:31:19.175527] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 
00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 83420 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83420 ']' 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83420 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83420 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83420' 00:20:03.526 killing process with pid 83420 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83420 00:20:03.526 [2024-07-25 11:31:19.217659] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.526 11:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83420 00:20:03.784 [2024-07-25 11:31:19.576932] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.159 ************************************ 00:20:05.159 END TEST raid_state_function_test_sb 00:20:05.159 ************************************ 00:20:05.159 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:05.159 00:20:05.159 real 0m37.526s 00:20:05.159 user 1m8.720s 00:20:05.159 sys 0m4.830s 00:20:05.159 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.159 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 11:31:20 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:05.159 11:31:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:05.159 11:31:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:05.159 11:31:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 ************************************ 00:20:05.159 START TEST raid_superblock_test 00:20:05.159 ************************************ 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # 
local base_bdevs_pt_uuid 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=84502 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 84502 /var/tmp/spdk-raid.sock 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84502 ']' 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:05.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.159 11:31:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.159 [2024-07-25 11:31:20.942603] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
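The entries above show raid_superblock_test launching its own bdev_svc app with the -r /var/tmp/spdk-raid.sock RPC socket and -L bdev_raid debug logging, then waiting for the socket before issuing any bdev RPCs. A hand-run equivalent of that launch-and-wait, as a sketch only (same binary and socket path as this run; rpc_get_methods is merely used here as a readiness probe, it is not what waitforlisten does internally):

    # start the bdev service with raid debug logging and a dedicated RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll the RPC socket until the app is ready to accept bdev commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done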
00:20:05.159 [2024-07-25 11:31:20.942987] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84502 ] 00:20:05.417 [2024-07-25 11:31:21.124228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.675 [2024-07-25 11:31:21.427499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.933 [2024-07-25 11:31:21.627729] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.933 [2024-07-25 11:31:21.627771] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.191 11:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:06.448 malloc1 00:20:06.448 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.707 [2024-07-25 11:31:22.370185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.707 [2024-07-25 11:31:22.370542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.707 [2024-07-25 11:31:22.370735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.707 [2024-07-25 11:31:22.370876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.707 [2024-07-25 11:31:22.373831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.707 [2024-07-25 11:31:22.374007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.707 pt1 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:06.707 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:06.965 malloc2 00:20:06.965 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:07.222 [2024-07-25 11:31:22.921311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:07.222 [2024-07-25 11:31:22.921415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.222 [2024-07-25 11:31:22.921445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:07.222 [2024-07-25 11:31:22.921465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.222 [2024-07-25 11:31:22.924170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.222 [2024-07-25 11:31:22.924232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:07.222 pt2 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:07.222 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.223 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.223 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.223 11:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:07.480 malloc3 00:20:07.480 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:07.737 [2024-07-25 11:31:23.464745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:07.737 [2024-07-25 11:31:23.464844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.738 [2024-07-25 11:31:23.464878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:07.738 [2024-07-25 11:31:23.464896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.738 [2024-07-25 11:31:23.467931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.738 [2024-07-25 
11:31:23.467980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:07.738 pt3 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:07.738 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:07.995 malloc4 00:20:07.995 11:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:08.270 [2024-07-25 11:31:24.000039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:08.270 [2024-07-25 11:31:24.000119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.270 [2024-07-25 11:31:24.000149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:08.270 [2024-07-25 11:31:24.000167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.270 [2024-07-25 11:31:24.002867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.270 [2024-07-25 11:31:24.002914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:08.270 pt4 00:20:08.270 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:08.270 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:08.270 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:08.548 [2024-07-25 11:31:24.256152] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:08.548 [2024-07-25 11:31:24.258517] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.548 [2024-07-25 11:31:24.258602] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:08.548 [2024-07-25 11:31:24.258702] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:08.548 [2024-07-25 11:31:24.258977] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:08.548 [2024-07-25 11:31:24.259007] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:08.548 [2024-07-25 11:31:24.259382] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:08.548 [2024-07-25 11:31:24.259634] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007780 00:20:08.548 [2024-07-25 11:31:24.259668] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:08.548 [2024-07-25 11:31:24.259886] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.548 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.806 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.806 "name": "raid_bdev1", 00:20:08.806 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:08.806 "strip_size_kb": 0, 00:20:08.806 "state": "online", 00:20:08.806 "raid_level": "raid1", 00:20:08.806 "superblock": true, 00:20:08.806 "num_base_bdevs": 4, 00:20:08.806 "num_base_bdevs_discovered": 4, 00:20:08.806 "num_base_bdevs_operational": 4, 00:20:08.806 "base_bdevs_list": [ 00:20:08.806 { 00:20:08.806 "name": "pt1", 00:20:08.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.806 "is_configured": true, 00:20:08.806 "data_offset": 2048, 00:20:08.806 "data_size": 63488 00:20:08.806 }, 00:20:08.806 { 00:20:08.806 "name": "pt2", 00:20:08.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.806 "is_configured": true, 00:20:08.806 "data_offset": 2048, 00:20:08.806 "data_size": 63488 00:20:08.806 }, 00:20:08.806 { 00:20:08.806 "name": "pt3", 00:20:08.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:08.806 "is_configured": true, 00:20:08.806 "data_offset": 2048, 00:20:08.806 "data_size": 63488 00:20:08.806 }, 00:20:08.806 { 00:20:08.806 "name": "pt4", 00:20:08.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:08.806 "is_configured": true, 00:20:08.806 "data_offset": 2048, 00:20:08.806 "data_size": 63488 00:20:08.806 } 00:20:08.806 ] 00:20:08.806 }' 00:20:08.806 11:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.806 11:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:09.372 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:09.630 [2024-07-25 11:31:25.432855] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:09.630 "name": "raid_bdev1", 00:20:09.630 "aliases": [ 00:20:09.630 "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4" 00:20:09.630 ], 00:20:09.630 "product_name": "Raid Volume", 00:20:09.630 "block_size": 512, 00:20:09.630 "num_blocks": 63488, 00:20:09.630 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:09.630 "assigned_rate_limits": { 00:20:09.630 "rw_ios_per_sec": 0, 00:20:09.630 "rw_mbytes_per_sec": 0, 00:20:09.630 "r_mbytes_per_sec": 0, 00:20:09.630 "w_mbytes_per_sec": 0 00:20:09.630 }, 00:20:09.630 "claimed": false, 00:20:09.630 "zoned": false, 00:20:09.630 "supported_io_types": { 00:20:09.630 "read": true, 00:20:09.630 "write": true, 00:20:09.630 "unmap": false, 00:20:09.630 "flush": false, 00:20:09.630 "reset": true, 00:20:09.630 "nvme_admin": false, 00:20:09.630 "nvme_io": false, 00:20:09.630 "nvme_io_md": false, 00:20:09.630 "write_zeroes": true, 00:20:09.630 "zcopy": false, 00:20:09.630 "get_zone_info": false, 00:20:09.630 "zone_management": false, 00:20:09.630 "zone_append": false, 00:20:09.630 "compare": false, 00:20:09.630 "compare_and_write": false, 00:20:09.630 "abort": false, 00:20:09.630 "seek_hole": false, 00:20:09.630 "seek_data": false, 00:20:09.630 "copy": false, 00:20:09.630 "nvme_iov_md": false 00:20:09.630 }, 00:20:09.630 "memory_domains": [ 00:20:09.630 { 00:20:09.630 "dma_device_id": "system", 00:20:09.630 "dma_device_type": 1 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.630 "dma_device_type": 2 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "system", 00:20:09.630 "dma_device_type": 1 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.630 "dma_device_type": 2 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "system", 00:20:09.630 "dma_device_type": 1 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.630 "dma_device_type": 2 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "system", 00:20:09.630 "dma_device_type": 1 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.630 "dma_device_type": 2 00:20:09.630 } 00:20:09.630 ], 00:20:09.630 "driver_specific": { 00:20:09.630 "raid": { 00:20:09.630 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:09.630 "strip_size_kb": 0, 00:20:09.630 "state": "online", 00:20:09.630 "raid_level": "raid1", 00:20:09.630 "superblock": true, 00:20:09.630 "num_base_bdevs": 4, 00:20:09.630 "num_base_bdevs_discovered": 4, 00:20:09.630 "num_base_bdevs_operational": 4, 00:20:09.630 "base_bdevs_list": [ 
00:20:09.630 { 00:20:09.630 "name": "pt1", 00:20:09.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.630 "is_configured": true, 00:20:09.630 "data_offset": 2048, 00:20:09.630 "data_size": 63488 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "name": "pt2", 00:20:09.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.630 "is_configured": true, 00:20:09.630 "data_offset": 2048, 00:20:09.630 "data_size": 63488 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "name": "pt3", 00:20:09.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:09.630 "is_configured": true, 00:20:09.630 "data_offset": 2048, 00:20:09.630 "data_size": 63488 00:20:09.630 }, 00:20:09.630 { 00:20:09.630 "name": "pt4", 00:20:09.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:09.630 "is_configured": true, 00:20:09.630 "data_offset": 2048, 00:20:09.630 "data_size": 63488 00:20:09.630 } 00:20:09.630 ] 00:20:09.630 } 00:20:09.630 } 00:20:09.630 }' 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:09.630 pt2 00:20:09.630 pt3 00:20:09.630 pt4' 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.630 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.196 "name": "pt1", 00:20:10.196 "aliases": [ 00:20:10.196 "00000000-0000-0000-0000-000000000001" 00:20:10.196 ], 00:20:10.196 "product_name": "passthru", 00:20:10.196 "block_size": 512, 00:20:10.196 "num_blocks": 65536, 00:20:10.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.196 "assigned_rate_limits": { 00:20:10.196 "rw_ios_per_sec": 0, 00:20:10.196 "rw_mbytes_per_sec": 0, 00:20:10.196 "r_mbytes_per_sec": 0, 00:20:10.196 "w_mbytes_per_sec": 0 00:20:10.196 }, 00:20:10.196 "claimed": true, 00:20:10.196 "claim_type": "exclusive_write", 00:20:10.196 "zoned": false, 00:20:10.196 "supported_io_types": { 00:20:10.196 "read": true, 00:20:10.196 "write": true, 00:20:10.196 "unmap": true, 00:20:10.196 "flush": true, 00:20:10.196 "reset": true, 00:20:10.196 "nvme_admin": false, 00:20:10.196 "nvme_io": false, 00:20:10.196 "nvme_io_md": false, 00:20:10.196 "write_zeroes": true, 00:20:10.196 "zcopy": true, 00:20:10.196 "get_zone_info": false, 00:20:10.196 "zone_management": false, 00:20:10.196 "zone_append": false, 00:20:10.196 "compare": false, 00:20:10.196 "compare_and_write": false, 00:20:10.196 "abort": true, 00:20:10.196 "seek_hole": false, 00:20:10.196 "seek_data": false, 00:20:10.196 "copy": true, 00:20:10.196 "nvme_iov_md": false 00:20:10.196 }, 00:20:10.196 "memory_domains": [ 00:20:10.196 { 00:20:10.196 "dma_device_id": "system", 00:20:10.196 "dma_device_type": 1 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.196 "dma_device_type": 2 00:20:10.196 } 00:20:10.196 ], 00:20:10.196 "driver_specific": { 00:20:10.196 "passthru": { 00:20:10.196 "name": "pt1", 00:20:10.196 "base_bdev_name": "malloc1" 00:20:10.196 } 00:20:10.196 } 00:20:10.196 }' 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.196 11:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.196 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.196 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.196 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.454 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.454 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.454 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.454 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:10.454 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.711 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.711 "name": "pt2", 00:20:10.711 "aliases": [ 00:20:10.711 "00000000-0000-0000-0000-000000000002" 00:20:10.711 ], 00:20:10.711 "product_name": "passthru", 00:20:10.711 "block_size": 512, 00:20:10.711 "num_blocks": 65536, 00:20:10.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.711 "assigned_rate_limits": { 00:20:10.711 "rw_ios_per_sec": 0, 00:20:10.711 "rw_mbytes_per_sec": 0, 00:20:10.711 "r_mbytes_per_sec": 0, 00:20:10.711 "w_mbytes_per_sec": 0 00:20:10.711 }, 00:20:10.711 "claimed": true, 00:20:10.711 "claim_type": "exclusive_write", 00:20:10.711 "zoned": false, 00:20:10.711 "supported_io_types": { 00:20:10.711 "read": true, 00:20:10.711 "write": true, 00:20:10.711 "unmap": true, 00:20:10.711 "flush": true, 00:20:10.711 "reset": true, 00:20:10.711 "nvme_admin": false, 00:20:10.711 "nvme_io": false, 00:20:10.711 "nvme_io_md": false, 00:20:10.711 "write_zeroes": true, 00:20:10.711 "zcopy": true, 00:20:10.711 "get_zone_info": false, 00:20:10.711 "zone_management": false, 00:20:10.711 "zone_append": false, 00:20:10.711 "compare": false, 00:20:10.711 "compare_and_write": false, 00:20:10.711 "abort": true, 00:20:10.711 "seek_hole": false, 00:20:10.711 "seek_data": false, 00:20:10.711 "copy": true, 00:20:10.711 "nvme_iov_md": false 00:20:10.711 }, 00:20:10.711 "memory_domains": [ 00:20:10.711 { 00:20:10.711 "dma_device_id": "system", 00:20:10.711 "dma_device_type": 1 00:20:10.711 }, 00:20:10.711 { 00:20:10.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.711 "dma_device_type": 2 00:20:10.711 } 00:20:10.711 ], 00:20:10.711 "driver_specific": { 00:20:10.711 "passthru": { 00:20:10.711 "name": "pt2", 00:20:10.711 "base_bdev_name": "malloc2" 00:20:10.711 } 00:20:10.711 } 00:20:10.711 }' 00:20:10.711 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.711 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.711 
11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.711 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.969 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.226 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.226 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.226 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:11.226 11:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.482 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.483 "name": "pt3", 00:20:11.483 "aliases": [ 00:20:11.483 "00000000-0000-0000-0000-000000000003" 00:20:11.483 ], 00:20:11.483 "product_name": "passthru", 00:20:11.483 "block_size": 512, 00:20:11.483 "num_blocks": 65536, 00:20:11.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:11.483 "assigned_rate_limits": { 00:20:11.483 "rw_ios_per_sec": 0, 00:20:11.483 "rw_mbytes_per_sec": 0, 00:20:11.483 "r_mbytes_per_sec": 0, 00:20:11.483 "w_mbytes_per_sec": 0 00:20:11.483 }, 00:20:11.483 "claimed": true, 00:20:11.483 "claim_type": "exclusive_write", 00:20:11.483 "zoned": false, 00:20:11.483 "supported_io_types": { 00:20:11.483 "read": true, 00:20:11.483 "write": true, 00:20:11.483 "unmap": true, 00:20:11.483 "flush": true, 00:20:11.483 "reset": true, 00:20:11.483 "nvme_admin": false, 00:20:11.483 "nvme_io": false, 00:20:11.483 "nvme_io_md": false, 00:20:11.483 "write_zeroes": true, 00:20:11.483 "zcopy": true, 00:20:11.483 "get_zone_info": false, 00:20:11.483 "zone_management": false, 00:20:11.483 "zone_append": false, 00:20:11.483 "compare": false, 00:20:11.483 "compare_and_write": false, 00:20:11.483 "abort": true, 00:20:11.483 "seek_hole": false, 00:20:11.483 "seek_data": false, 00:20:11.483 "copy": true, 00:20:11.483 "nvme_iov_md": false 00:20:11.483 }, 00:20:11.483 "memory_domains": [ 00:20:11.483 { 00:20:11.483 "dma_device_id": "system", 00:20:11.483 "dma_device_type": 1 00:20:11.483 }, 00:20:11.483 { 00:20:11.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.483 "dma_device_type": 2 00:20:11.483 } 00:20:11.483 ], 00:20:11.483 "driver_specific": { 00:20:11.483 "passthru": { 00:20:11.483 "name": "pt3", 00:20:11.483 "base_bdev_name": "malloc3" 00:20:11.483 } 00:20:11.483 } 00:20:11.483 }' 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.483 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:11.744 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:12.007 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:12.007 "name": "pt4", 00:20:12.007 "aliases": [ 00:20:12.007 "00000000-0000-0000-0000-000000000004" 00:20:12.007 ], 00:20:12.007 "product_name": "passthru", 00:20:12.007 "block_size": 512, 00:20:12.007 "num_blocks": 65536, 00:20:12.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:12.007 "assigned_rate_limits": { 00:20:12.007 "rw_ios_per_sec": 0, 00:20:12.007 "rw_mbytes_per_sec": 0, 00:20:12.007 "r_mbytes_per_sec": 0, 00:20:12.007 "w_mbytes_per_sec": 0 00:20:12.007 }, 00:20:12.007 "claimed": true, 00:20:12.007 "claim_type": "exclusive_write", 00:20:12.007 "zoned": false, 00:20:12.007 "supported_io_types": { 00:20:12.007 "read": true, 00:20:12.007 "write": true, 00:20:12.007 "unmap": true, 00:20:12.007 "flush": true, 00:20:12.007 "reset": true, 00:20:12.007 "nvme_admin": false, 00:20:12.007 "nvme_io": false, 00:20:12.007 "nvme_io_md": false, 00:20:12.007 "write_zeroes": true, 00:20:12.007 "zcopy": true, 00:20:12.007 "get_zone_info": false, 00:20:12.007 "zone_management": false, 00:20:12.007 "zone_append": false, 00:20:12.007 "compare": false, 00:20:12.007 "compare_and_write": false, 00:20:12.007 "abort": true, 00:20:12.007 "seek_hole": false, 00:20:12.007 "seek_data": false, 00:20:12.007 "copy": true, 00:20:12.007 "nvme_iov_md": false 00:20:12.007 }, 00:20:12.007 "memory_domains": [ 00:20:12.007 { 00:20:12.007 "dma_device_id": "system", 00:20:12.007 "dma_device_type": 1 00:20:12.007 }, 00:20:12.007 { 00:20:12.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.007 "dma_device_type": 2 00:20:12.007 } 00:20:12.007 ], 00:20:12.007 "driver_specific": { 00:20:12.007 "passthru": { 00:20:12.007 "name": "pt4", 00:20:12.007 "base_bdev_name": "malloc4" 00:20:12.007 } 00:20:12.007 } 00:20:12.007 }' 00:20:12.007 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.007 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:12.007 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:12.007 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.265 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:12.265 
11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:12.265 11:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.265 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:12.265 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:12.265 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.524 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:12.524 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:12.524 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:12.524 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:12.782 [2024-07-25 11:31:28.477717] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.782 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 00:20:12.782 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 ']' 00:20:12.782 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:13.040 [2024-07-25 11:31:28.709372] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.040 [2024-07-25 11:31:28.709416] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.040 [2024-07-25 11:31:28.709515] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.040 [2024-07-25 11:31:28.709652] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.040 [2024-07-25 11:31:28.709670] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:13.040 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.040 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:20:13.299 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:13.299 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:13.299 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.299 11:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:13.558 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.558 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:13.816 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.816 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete 
pt3 00:20:13.816 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:13.816 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:14.073 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:14.073 11:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:14.332 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:14.589 [2024-07-25 11:31:30.389748] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:14.589 [2024-07-25 11:31:30.392132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:14.589 [2024-07-25 11:31:30.392207] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:14.589 [2024-07-25 11:31:30.392256] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:14.589 [2024-07-25 11:31:30.392344] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:14.589 [2024-07-25 11:31:30.392425] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:14.589 [2024-07-25 11:31:30.392463] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:20:14.589 [2024-07-25 11:31:30.392491] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:14.589 [2024-07-25 11:31:30.392515] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:14.589 [2024-07-25 11:31:30.392529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:14.589 request: 00:20:14.589 { 00:20:14.589 "name": "raid_bdev1", 00:20:14.589 "raid_level": "raid1", 00:20:14.589 "base_bdevs": [ 00:20:14.589 "malloc1", 00:20:14.589 "malloc2", 00:20:14.589 "malloc3", 00:20:14.589 "malloc4" 00:20:14.589 ], 00:20:14.589 "superblock": false, 00:20:14.589 "method": "bdev_raid_create", 00:20:14.589 "req_id": 1 00:20:14.589 } 00:20:14.589 Got JSON-RPC error response 00:20:14.589 response: 00:20:14.589 { 00:20:14.589 "code": -17, 00:20:14.589 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:14.589 } 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.589 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:14.847 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:14.847 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:14.847 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:15.105 [2024-07-25 11:31:30.921829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:15.105 [2024-07-25 11:31:30.921923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.105 [2024-07-25 11:31:30.921963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:15.105 [2024-07-25 11:31:30.921978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.105 [2024-07-25 11:31:30.924815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.105 [2024-07-25 11:31:30.924862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:15.105 [2024-07-25 11:31:30.924983] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:15.105 [2024-07-25 11:31:30.925053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:15.105 pt1 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.105 11:31:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.105 11:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.362 11:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.362 "name": "raid_bdev1", 00:20:15.362 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:15.362 "strip_size_kb": 0, 00:20:15.362 "state": "configuring", 00:20:15.362 "raid_level": "raid1", 00:20:15.363 "superblock": true, 00:20:15.363 "num_base_bdevs": 4, 00:20:15.363 "num_base_bdevs_discovered": 1, 00:20:15.363 "num_base_bdevs_operational": 4, 00:20:15.363 "base_bdevs_list": [ 00:20:15.363 { 00:20:15.363 "name": "pt1", 00:20:15.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.363 "is_configured": true, 00:20:15.363 "data_offset": 2048, 00:20:15.363 "data_size": 63488 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": null, 00:20:15.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.363 "is_configured": false, 00:20:15.363 "data_offset": 2048, 00:20:15.363 "data_size": 63488 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": null, 00:20:15.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:15.363 "is_configured": false, 00:20:15.363 "data_offset": 2048, 00:20:15.363 "data_size": 63488 00:20:15.363 }, 00:20:15.363 { 00:20:15.363 "name": null, 00:20:15.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:15.363 "is_configured": false, 00:20:15.363 "data_offset": 2048, 00:20:15.363 "data_size": 63488 00:20:15.363 } 00:20:15.363 ] 00:20:15.363 }' 00:20:15.363 11:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.363 11:31:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.297 11:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:20:16.297 11:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:16.297 [2024-07-25 11:31:32.019269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:16.297 [2024-07-25 11:31:32.019375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.297 [2024-07-25 11:31:32.019408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:16.297 [2024-07-25 11:31:32.019422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.297 [2024-07-25 11:31:32.020041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:16.297 [2024-07-25 11:31:32.020073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:16.297 [2024-07-25 11:31:32.020188] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:16.297 [2024-07-25 11:31:32.020219] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.297 pt2 00:20:16.297 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:16.555 [2024-07-25 11:31:32.295381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.555 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.814 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.814 "name": "raid_bdev1", 00:20:16.814 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:16.814 "strip_size_kb": 0, 00:20:16.814 "state": "configuring", 00:20:16.814 "raid_level": "raid1", 00:20:16.814 "superblock": true, 00:20:16.814 "num_base_bdevs": 4, 00:20:16.814 "num_base_bdevs_discovered": 1, 00:20:16.814 "num_base_bdevs_operational": 4, 00:20:16.814 "base_bdevs_list": [ 00:20:16.814 { 00:20:16.814 "name": "pt1", 00:20:16.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.814 "is_configured": true, 00:20:16.814 "data_offset": 2048, 00:20:16.814 "data_size": 63488 00:20:16.814 }, 00:20:16.814 { 00:20:16.814 "name": null, 00:20:16.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.814 "is_configured": false, 00:20:16.814 "data_offset": 2048, 00:20:16.814 "data_size": 63488 00:20:16.814 }, 00:20:16.814 { 00:20:16.814 "name": null, 00:20:16.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:16.814 "is_configured": false, 00:20:16.814 "data_offset": 2048, 00:20:16.814 "data_size": 63488 00:20:16.814 }, 00:20:16.814 { 00:20:16.814 "name": null, 00:20:16.814 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:16.814 "is_configured": false, 00:20:16.814 "data_offset": 2048, 00:20:16.814 "data_size": 63488 00:20:16.814 } 00:20:16.814 ] 00:20:16.814 }' 00:20:16.814 11:31:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.814 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.748 [2024-07-25 11:31:33.487676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.748 [2024-07-25 11:31:33.487778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.748 [2024-07-25 11:31:33.487814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:17.748 [2024-07-25 11:31:33.487835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.748 [2024-07-25 11:31:33.488390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.748 [2024-07-25 11:31:33.488422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.748 [2024-07-25 11:31:33.488530] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.748 [2024-07-25 11:31:33.488567] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.748 pt2 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:17.748 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:18.007 [2024-07-25 11:31:33.759778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:18.007 [2024-07-25 11:31:33.759915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.007 [2024-07-25 11:31:33.759945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:18.007 [2024-07-25 11:31:33.759977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.007 [2024-07-25 11:31:33.760509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.007 [2024-07-25 11:31:33.760538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:18.007 [2024-07-25 11:31:33.760693] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:18.007 [2024-07-25 11:31:33.760732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:18.007 pt3 00:20:18.007 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:18.007 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:18.007 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:18.266 [2024-07-25 11:31:34.023863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:20:18.266 [2024-07-25 11:31:34.023967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.266 [2024-07-25 11:31:34.023996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:18.266 [2024-07-25 11:31:34.024014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.266 [2024-07-25 11:31:34.024573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.266 [2024-07-25 11:31:34.024607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:18.266 [2024-07-25 11:31:34.024750] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:18.266 [2024-07-25 11:31:34.024791] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:18.266 [2024-07-25 11:31:34.024981] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:18.266 [2024-07-25 11:31:34.025001] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:18.266 [2024-07-25 11:31:34.025313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:18.266 [2024-07-25 11:31:34.025516] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:18.266 [2024-07-25 11:31:34.025532] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:18.266 [2024-07-25 11:31:34.025722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.266 pt4 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.266 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.524 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.524 "name": "raid_bdev1", 00:20:18.524 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:18.524 "strip_size_kb": 0, 00:20:18.524 "state": "online", 00:20:18.524 "raid_level": 
"raid1", 00:20:18.524 "superblock": true, 00:20:18.524 "num_base_bdevs": 4, 00:20:18.524 "num_base_bdevs_discovered": 4, 00:20:18.524 "num_base_bdevs_operational": 4, 00:20:18.524 "base_bdevs_list": [ 00:20:18.524 { 00:20:18.524 "name": "pt1", 00:20:18.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:18.525 "is_configured": true, 00:20:18.525 "data_offset": 2048, 00:20:18.525 "data_size": 63488 00:20:18.525 }, 00:20:18.525 { 00:20:18.525 "name": "pt2", 00:20:18.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:18.525 "is_configured": true, 00:20:18.525 "data_offset": 2048, 00:20:18.525 "data_size": 63488 00:20:18.525 }, 00:20:18.525 { 00:20:18.525 "name": "pt3", 00:20:18.525 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:18.525 "is_configured": true, 00:20:18.525 "data_offset": 2048, 00:20:18.525 "data_size": 63488 00:20:18.525 }, 00:20:18.525 { 00:20:18.525 "name": "pt4", 00:20:18.525 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:18.525 "is_configured": true, 00:20:18.525 "data_offset": 2048, 00:20:18.525 "data_size": 63488 00:20:18.525 } 00:20:18.525 ] 00:20:18.525 }' 00:20:18.525 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.525 11:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:19.090 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:19.091 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:19.091 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:19.349 [2024-07-25 11:31:35.184676] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.349 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:19.349 "name": "raid_bdev1", 00:20:19.349 "aliases": [ 00:20:19.349 "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4" 00:20:19.349 ], 00:20:19.349 "product_name": "Raid Volume", 00:20:19.349 "block_size": 512, 00:20:19.349 "num_blocks": 63488, 00:20:19.349 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:19.349 "assigned_rate_limits": { 00:20:19.349 "rw_ios_per_sec": 0, 00:20:19.349 "rw_mbytes_per_sec": 0, 00:20:19.349 "r_mbytes_per_sec": 0, 00:20:19.349 "w_mbytes_per_sec": 0 00:20:19.349 }, 00:20:19.349 "claimed": false, 00:20:19.349 "zoned": false, 00:20:19.349 "supported_io_types": { 00:20:19.349 "read": true, 00:20:19.349 "write": true, 00:20:19.349 "unmap": false, 00:20:19.349 "flush": false, 00:20:19.349 "reset": true, 00:20:19.349 "nvme_admin": false, 00:20:19.349 "nvme_io": false, 00:20:19.349 "nvme_io_md": false, 00:20:19.349 "write_zeroes": true, 00:20:19.349 "zcopy": false, 00:20:19.349 "get_zone_info": false, 00:20:19.349 "zone_management": false, 00:20:19.349 "zone_append": false, 00:20:19.349 "compare": false, 00:20:19.349 "compare_and_write": false, 00:20:19.349 
"abort": false, 00:20:19.349 "seek_hole": false, 00:20:19.349 "seek_data": false, 00:20:19.349 "copy": false, 00:20:19.349 "nvme_iov_md": false 00:20:19.349 }, 00:20:19.349 "memory_domains": [ 00:20:19.349 { 00:20:19.349 "dma_device_id": "system", 00:20:19.349 "dma_device_type": 1 00:20:19.349 }, 00:20:19.349 { 00:20:19.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.349 "dma_device_type": 2 00:20:19.349 }, 00:20:19.349 { 00:20:19.349 "dma_device_id": "system", 00:20:19.349 "dma_device_type": 1 00:20:19.349 }, 00:20:19.350 { 00:20:19.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.350 "dma_device_type": 2 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "dma_device_id": "system", 00:20:19.350 "dma_device_type": 1 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.350 "dma_device_type": 2 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "dma_device_id": "system", 00:20:19.350 "dma_device_type": 1 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.350 "dma_device_type": 2 00:20:19.350 } 00:20:19.350 ], 00:20:19.350 "driver_specific": { 00:20:19.350 "raid": { 00:20:19.350 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:19.350 "strip_size_kb": 0, 00:20:19.350 "state": "online", 00:20:19.350 "raid_level": "raid1", 00:20:19.350 "superblock": true, 00:20:19.350 "num_base_bdevs": 4, 00:20:19.350 "num_base_bdevs_discovered": 4, 00:20:19.350 "num_base_bdevs_operational": 4, 00:20:19.350 "base_bdevs_list": [ 00:20:19.350 { 00:20:19.350 "name": "pt1", 00:20:19.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:19.350 "is_configured": true, 00:20:19.350 "data_offset": 2048, 00:20:19.350 "data_size": 63488 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "name": "pt2", 00:20:19.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:19.350 "is_configured": true, 00:20:19.350 "data_offset": 2048, 00:20:19.350 "data_size": 63488 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "name": "pt3", 00:20:19.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:19.350 "is_configured": true, 00:20:19.350 "data_offset": 2048, 00:20:19.350 "data_size": 63488 00:20:19.350 }, 00:20:19.350 { 00:20:19.350 "name": "pt4", 00:20:19.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:19.350 "is_configured": true, 00:20:19.350 "data_offset": 2048, 00:20:19.350 "data_size": 63488 00:20:19.350 } 00:20:19.350 ] 00:20:19.350 } 00:20:19.350 } 00:20:19.350 }' 00:20:19.350 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:19.608 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:19.608 pt2 00:20:19.608 pt3 00:20:19.608 pt4' 00:20:19.608 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:19.608 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:19.608 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:19.867 "name": "pt1", 00:20:19.867 "aliases": [ 00:20:19.867 "00000000-0000-0000-0000-000000000001" 00:20:19.867 ], 00:20:19.867 "product_name": "passthru", 00:20:19.867 "block_size": 512, 00:20:19.867 "num_blocks": 65536, 00:20:19.867 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:19.867 "assigned_rate_limits": { 00:20:19.867 "rw_ios_per_sec": 0, 00:20:19.867 "rw_mbytes_per_sec": 0, 00:20:19.867 "r_mbytes_per_sec": 0, 00:20:19.867 "w_mbytes_per_sec": 0 00:20:19.867 }, 00:20:19.867 "claimed": true, 00:20:19.867 "claim_type": "exclusive_write", 00:20:19.867 "zoned": false, 00:20:19.867 "supported_io_types": { 00:20:19.867 "read": true, 00:20:19.867 "write": true, 00:20:19.867 "unmap": true, 00:20:19.867 "flush": true, 00:20:19.867 "reset": true, 00:20:19.867 "nvme_admin": false, 00:20:19.867 "nvme_io": false, 00:20:19.867 "nvme_io_md": false, 00:20:19.867 "write_zeroes": true, 00:20:19.867 "zcopy": true, 00:20:19.867 "get_zone_info": false, 00:20:19.867 "zone_management": false, 00:20:19.867 "zone_append": false, 00:20:19.867 "compare": false, 00:20:19.867 "compare_and_write": false, 00:20:19.867 "abort": true, 00:20:19.867 "seek_hole": false, 00:20:19.867 "seek_data": false, 00:20:19.867 "copy": true, 00:20:19.867 "nvme_iov_md": false 00:20:19.867 }, 00:20:19.867 "memory_domains": [ 00:20:19.867 { 00:20:19.867 "dma_device_id": "system", 00:20:19.867 "dma_device_type": 1 00:20:19.867 }, 00:20:19.867 { 00:20:19.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.867 "dma_device_type": 2 00:20:19.867 } 00:20:19.867 ], 00:20:19.867 "driver_specific": { 00:20:19.867 "passthru": { 00:20:19.867 "name": "pt1", 00:20:19.867 "base_bdev_name": "malloc1" 00:20:19.867 } 00:20:19.867 } 00:20:19.867 }' 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.867 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:20.125 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:20.383 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:20.383 "name": "pt2", 00:20:20.383 "aliases": [ 00:20:20.383 "00000000-0000-0000-0000-000000000002" 00:20:20.383 ], 00:20:20.383 "product_name": "passthru", 00:20:20.383 "block_size": 512, 00:20:20.383 "num_blocks": 65536, 00:20:20.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:20.383 "assigned_rate_limits": { 00:20:20.383 "rw_ios_per_sec": 0, 00:20:20.383 "rw_mbytes_per_sec": 0, 
00:20:20.383 "r_mbytes_per_sec": 0, 00:20:20.383 "w_mbytes_per_sec": 0 00:20:20.383 }, 00:20:20.383 "claimed": true, 00:20:20.383 "claim_type": "exclusive_write", 00:20:20.383 "zoned": false, 00:20:20.383 "supported_io_types": { 00:20:20.383 "read": true, 00:20:20.383 "write": true, 00:20:20.383 "unmap": true, 00:20:20.383 "flush": true, 00:20:20.383 "reset": true, 00:20:20.383 "nvme_admin": false, 00:20:20.383 "nvme_io": false, 00:20:20.383 "nvme_io_md": false, 00:20:20.383 "write_zeroes": true, 00:20:20.383 "zcopy": true, 00:20:20.383 "get_zone_info": false, 00:20:20.383 "zone_management": false, 00:20:20.383 "zone_append": false, 00:20:20.383 "compare": false, 00:20:20.383 "compare_and_write": false, 00:20:20.383 "abort": true, 00:20:20.383 "seek_hole": false, 00:20:20.383 "seek_data": false, 00:20:20.383 "copy": true, 00:20:20.383 "nvme_iov_md": false 00:20:20.383 }, 00:20:20.383 "memory_domains": [ 00:20:20.383 { 00:20:20.383 "dma_device_id": "system", 00:20:20.383 "dma_device_type": 1 00:20:20.383 }, 00:20:20.383 { 00:20:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.383 "dma_device_type": 2 00:20:20.383 } 00:20:20.383 ], 00:20:20.383 "driver_specific": { 00:20:20.383 "passthru": { 00:20:20.383 "name": "pt2", 00:20:20.383 "base_bdev_name": "malloc2" 00:20:20.383 } 00:20:20.383 } 00:20:20.383 }' 00:20:20.383 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.383 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.641 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:20.902 "name": "pt3", 00:20:20.902 "aliases": [ 00:20:20.902 "00000000-0000-0000-0000-000000000003" 00:20:20.902 ], 00:20:20.902 "product_name": "passthru", 00:20:20.902 "block_size": 512, 00:20:20.902 "num_blocks": 65536, 00:20:20.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:20.902 "assigned_rate_limits": { 00:20:20.902 "rw_ios_per_sec": 0, 00:20:20.902 "rw_mbytes_per_sec": 0, 00:20:20.902 "r_mbytes_per_sec": 0, 00:20:20.902 "w_mbytes_per_sec": 0 00:20:20.902 }, 00:20:20.902 "claimed": true, 00:20:20.902 "claim_type": 
"exclusive_write", 00:20:20.902 "zoned": false, 00:20:20.902 "supported_io_types": { 00:20:20.902 "read": true, 00:20:20.902 "write": true, 00:20:20.902 "unmap": true, 00:20:20.902 "flush": true, 00:20:20.902 "reset": true, 00:20:20.902 "nvme_admin": false, 00:20:20.902 "nvme_io": false, 00:20:20.902 "nvme_io_md": false, 00:20:20.902 "write_zeroes": true, 00:20:20.902 "zcopy": true, 00:20:20.902 "get_zone_info": false, 00:20:20.902 "zone_management": false, 00:20:20.902 "zone_append": false, 00:20:20.902 "compare": false, 00:20:20.902 "compare_and_write": false, 00:20:20.902 "abort": true, 00:20:20.902 "seek_hole": false, 00:20:20.902 "seek_data": false, 00:20:20.902 "copy": true, 00:20:20.902 "nvme_iov_md": false 00:20:20.902 }, 00:20:20.902 "memory_domains": [ 00:20:20.902 { 00:20:20.902 "dma_device_id": "system", 00:20:20.902 "dma_device_type": 1 00:20:20.902 }, 00:20:20.902 { 00:20:20.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.902 "dma_device_type": 2 00:20:20.902 } 00:20:20.902 ], 00:20:20.902 "driver_specific": { 00:20:20.902 "passthru": { 00:20:20.902 "name": "pt3", 00:20:20.902 "base_bdev_name": "malloc3" 00:20:20.902 } 00:20:20.902 } 00:20:20.902 }' 00:20:20.902 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.164 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.164 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.441 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:21.706 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.706 "name": "pt4", 00:20:21.706 "aliases": [ 00:20:21.706 "00000000-0000-0000-0000-000000000004" 00:20:21.706 ], 00:20:21.706 "product_name": "passthru", 00:20:21.706 "block_size": 512, 00:20:21.706 "num_blocks": 65536, 00:20:21.706 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:21.706 "assigned_rate_limits": { 00:20:21.706 "rw_ios_per_sec": 0, 00:20:21.706 "rw_mbytes_per_sec": 0, 00:20:21.706 "r_mbytes_per_sec": 0, 00:20:21.706 "w_mbytes_per_sec": 0 00:20:21.706 }, 00:20:21.706 "claimed": true, 00:20:21.706 "claim_type": "exclusive_write", 00:20:21.706 "zoned": false, 00:20:21.706 "supported_io_types": { 00:20:21.706 "read": true, 00:20:21.706 "write": true, 00:20:21.706 
"unmap": true, 00:20:21.706 "flush": true, 00:20:21.706 "reset": true, 00:20:21.706 "nvme_admin": false, 00:20:21.706 "nvme_io": false, 00:20:21.706 "nvme_io_md": false, 00:20:21.706 "write_zeroes": true, 00:20:21.706 "zcopy": true, 00:20:21.706 "get_zone_info": false, 00:20:21.706 "zone_management": false, 00:20:21.706 "zone_append": false, 00:20:21.706 "compare": false, 00:20:21.706 "compare_and_write": false, 00:20:21.706 "abort": true, 00:20:21.706 "seek_hole": false, 00:20:21.706 "seek_data": false, 00:20:21.706 "copy": true, 00:20:21.706 "nvme_iov_md": false 00:20:21.706 }, 00:20:21.706 "memory_domains": [ 00:20:21.706 { 00:20:21.706 "dma_device_id": "system", 00:20:21.706 "dma_device_type": 1 00:20:21.706 }, 00:20:21.706 { 00:20:21.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.706 "dma_device_type": 2 00:20:21.706 } 00:20:21.706 ], 00:20:21.706 "driver_specific": { 00:20:21.706 "passthru": { 00:20:21.706 "name": "pt4", 00:20:21.706 "base_bdev_name": "malloc4" 00:20:21.706 } 00:20:21.706 } 00:20:21.706 }' 00:20:21.706 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.706 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.706 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.706 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.975 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.246 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.246 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:22.246 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:20:22.519 [2024-07-25 11:31:38.149406] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.519 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 '!=' 8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 ']' 00:20:22.519 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:20:22.519 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:22.519 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:22.519 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:22.809 [2024-07-25 11:31:38.417147] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:22.809 11:31:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.809 "name": "raid_bdev1", 00:20:22.809 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:22.809 "strip_size_kb": 0, 00:20:22.809 "state": "online", 00:20:22.809 "raid_level": "raid1", 00:20:22.809 "superblock": true, 00:20:22.809 "num_base_bdevs": 4, 00:20:22.809 "num_base_bdevs_discovered": 3, 00:20:22.809 "num_base_bdevs_operational": 3, 00:20:22.809 "base_bdevs_list": [ 00:20:22.809 { 00:20:22.809 "name": null, 00:20:22.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.809 "is_configured": false, 00:20:22.809 "data_offset": 2048, 00:20:22.809 "data_size": 63488 00:20:22.809 }, 00:20:22.809 { 00:20:22.809 "name": "pt2", 00:20:22.809 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:22.809 "is_configured": true, 00:20:22.809 "data_offset": 2048, 00:20:22.809 "data_size": 63488 00:20:22.809 }, 00:20:22.809 { 00:20:22.809 "name": "pt3", 00:20:22.809 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:22.809 "is_configured": true, 00:20:22.809 "data_offset": 2048, 00:20:22.809 "data_size": 63488 00:20:22.809 }, 00:20:22.809 { 00:20:22.809 "name": "pt4", 00:20:22.809 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:22.809 "is_configured": true, 00:20:22.809 "data_offset": 2048, 00:20:22.809 "data_size": 63488 00:20:22.809 } 00:20:22.809 ] 00:20:22.809 }' 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.809 11:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.748 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:23.748 [2024-07-25 11:31:39.545448] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.748 [2024-07-25 11:31:39.545490] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.748 [2024-07-25 11:31:39.545589] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.748 [2024-07-25 11:31:39.545704] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:20:23.748 [2024-07-25 11:31:39.545729] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:23.748 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.748 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:20:24.007 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:20:24.007 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:20:24.007 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:24.007 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:20:24.007 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:24.265 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.265 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:20:24.265 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:24.524 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.524 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:20:24.524 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:24.782 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:24.782 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:20:24.782 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:20:24.782 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:20:24.782 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.040 [2024-07-25 11:31:40.741765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.040 [2024-07-25 11:31:40.741877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.040 [2024-07-25 11:31:40.741908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:25.040 [2024-07-25 11:31:40.741927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.040 [2024-07-25 11:31:40.745006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.040 [2024-07-25 11:31:40.745221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.040 [2024-07-25 11:31:40.745454] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:25.040 [2024-07-25 11:31:40.745670] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.040 pt2 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:25.040 11:31:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.040 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.298 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.298 "name": "raid_bdev1", 00:20:25.298 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:25.298 "strip_size_kb": 0, 00:20:25.298 "state": "configuring", 00:20:25.298 "raid_level": "raid1", 00:20:25.298 "superblock": true, 00:20:25.298 "num_base_bdevs": 4, 00:20:25.298 "num_base_bdevs_discovered": 1, 00:20:25.298 "num_base_bdevs_operational": 3, 00:20:25.298 "base_bdevs_list": [ 00:20:25.298 { 00:20:25.298 "name": null, 00:20:25.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.298 "is_configured": false, 00:20:25.298 "data_offset": 2048, 00:20:25.298 "data_size": 63488 00:20:25.298 }, 00:20:25.298 { 00:20:25.298 "name": "pt2", 00:20:25.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.298 "is_configured": true, 00:20:25.298 "data_offset": 2048, 00:20:25.298 "data_size": 63488 00:20:25.298 }, 00:20:25.298 { 00:20:25.298 "name": null, 00:20:25.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.298 "is_configured": false, 00:20:25.298 "data_offset": 2048, 00:20:25.298 "data_size": 63488 00:20:25.298 }, 00:20:25.298 { 00:20:25.298 "name": null, 00:20:25.298 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:25.298 "is_configured": false, 00:20:25.298 "data_offset": 2048, 00:20:25.298 "data_size": 63488 00:20:25.298 } 00:20:25.298 ] 00:20:25.298 }' 00:20:25.298 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.298 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.865 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:20:25.865 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:20:25.865 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:26.121 [2024-07-25 11:31:41.922340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:26.121 [2024-07-25 11:31:41.922447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.121 
[2024-07-25 11:31:41.922494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:26.121 [2024-07-25 11:31:41.922511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.121 [2024-07-25 11:31:41.923293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.121 [2024-07-25 11:31:41.923444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:26.121 [2024-07-25 11:31:41.923672] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:26.121 [2024-07-25 11:31:41.923723] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.121 pt3 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.121 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.122 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.122 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.122 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.379 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.379 "name": "raid_bdev1", 00:20:26.379 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:26.379 "strip_size_kb": 0, 00:20:26.379 "state": "configuring", 00:20:26.379 "raid_level": "raid1", 00:20:26.379 "superblock": true, 00:20:26.379 "num_base_bdevs": 4, 00:20:26.379 "num_base_bdevs_discovered": 2, 00:20:26.379 "num_base_bdevs_operational": 3, 00:20:26.379 "base_bdevs_list": [ 00:20:26.379 { 00:20:26.379 "name": null, 00:20:26.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.379 "is_configured": false, 00:20:26.379 "data_offset": 2048, 00:20:26.379 "data_size": 63488 00:20:26.379 }, 00:20:26.379 { 00:20:26.379 "name": "pt2", 00:20:26.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.379 "is_configured": true, 00:20:26.379 "data_offset": 2048, 00:20:26.379 "data_size": 63488 00:20:26.379 }, 00:20:26.379 { 00:20:26.379 "name": "pt3", 00:20:26.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.379 "is_configured": true, 00:20:26.379 "data_offset": 2048, 00:20:26.379 "data_size": 63488 00:20:26.379 }, 00:20:26.379 { 00:20:26.379 "name": null, 00:20:26.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:26.379 "is_configured": false, 00:20:26.379 "data_offset": 2048, 00:20:26.379 "data_size": 63488 00:20:26.379 } 00:20:26.379 ] 
00:20:26.379 }' 00:20:26.379 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.379 11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.944 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:20:26.944 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:20:26.944 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:26.944 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:27.223 [2024-07-25 11:31:43.086696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:27.224 [2024-07-25 11:31:43.086813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.224 [2024-07-25 11:31:43.086852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:27.224 [2024-07-25 11:31:43.086870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.224 [2024-07-25 11:31:43.087480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.224 [2024-07-25 11:31:43.087511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:27.224 [2024-07-25 11:31:43.087636] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:27.224 [2024-07-25 11:31:43.087672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:27.224 [2024-07-25 11:31:43.087866] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:27.224 [2024-07-25 11:31:43.087888] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:27.224 [2024-07-25 11:31:43.088254] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:27.224 [2024-07-25 11:31:43.088523] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:27.224 [2024-07-25 11:31:43.088540] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:27.224 [2024-07-25 11:31:43.088755] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.224 pt4 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.483 11:31:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.483 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.744 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.744 "name": "raid_bdev1", 00:20:27.744 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:27.744 "strip_size_kb": 0, 00:20:27.744 "state": "online", 00:20:27.744 "raid_level": "raid1", 00:20:27.744 "superblock": true, 00:20:27.744 "num_base_bdevs": 4, 00:20:27.744 "num_base_bdevs_discovered": 3, 00:20:27.744 "num_base_bdevs_operational": 3, 00:20:27.744 "base_bdevs_list": [ 00:20:27.744 { 00:20:27.744 "name": null, 00:20:27.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.744 "is_configured": false, 00:20:27.744 "data_offset": 2048, 00:20:27.744 "data_size": 63488 00:20:27.744 }, 00:20:27.744 { 00:20:27.744 "name": "pt2", 00:20:27.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.744 "is_configured": true, 00:20:27.744 "data_offset": 2048, 00:20:27.744 "data_size": 63488 00:20:27.744 }, 00:20:27.744 { 00:20:27.744 "name": "pt3", 00:20:27.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:27.744 "is_configured": true, 00:20:27.744 "data_offset": 2048, 00:20:27.744 "data_size": 63488 00:20:27.744 }, 00:20:27.744 { 00:20:27.744 "name": "pt4", 00:20:27.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:27.744 "is_configured": true, 00:20:27.744 "data_offset": 2048, 00:20:27.744 "data_size": 63488 00:20:27.744 } 00:20:27.744 ] 00:20:27.744 }' 00:20:27.744 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.744 11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.322 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:28.582 [2024-07-25 11:31:44.214987] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.582 [2024-07-25 11:31:44.215055] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.582 [2024-07-25 11:31:44.215159] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.582 [2024-07-25 11:31:44.215251] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.582 [2024-07-25 11:31:44.215265] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:28.582 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.582 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:20:28.840 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:20:28.840 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:20:28.840 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:20:28.840 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:20:28.840 11:31:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:29.098 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:29.098 [2024-07-25 11:31:44.979167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:29.098 [2024-07-25 11:31:44.979287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.098 [2024-07-25 11:31:44.979322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:29.098 [2024-07-25 11:31:44.979337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.356 [2024-07-25 11:31:44.982256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.356 [2024-07-25 11:31:44.982298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:29.356 [2024-07-25 11:31:44.982436] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:29.356 [2024-07-25 11:31:44.982493] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:29.356 [2024-07-25 11:31:44.982688] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:29.356 [2024-07-25 11:31:44.982706] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.356 [2024-07-25 11:31:44.982733] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:29.356 [2024-07-25 11:31:44.982794] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.356 [2024-07-25 11:31:44.982951] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:29.356 pt1 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.356 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.356 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.356 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:29.615 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.615 "name": "raid_bdev1", 00:20:29.615 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:29.615 "strip_size_kb": 0, 00:20:29.615 "state": "configuring", 00:20:29.615 "raid_level": "raid1", 00:20:29.615 "superblock": true, 00:20:29.615 "num_base_bdevs": 4, 00:20:29.615 "num_base_bdevs_discovered": 2, 00:20:29.615 "num_base_bdevs_operational": 3, 00:20:29.615 "base_bdevs_list": [ 00:20:29.615 { 00:20:29.615 "name": null, 00:20:29.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.615 "is_configured": false, 00:20:29.615 "data_offset": 2048, 00:20:29.615 "data_size": 63488 00:20:29.615 }, 00:20:29.615 { 00:20:29.615 "name": "pt2", 00:20:29.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.615 "is_configured": true, 00:20:29.615 "data_offset": 2048, 00:20:29.615 "data_size": 63488 00:20:29.615 }, 00:20:29.615 { 00:20:29.615 "name": "pt3", 00:20:29.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:29.615 "is_configured": true, 00:20:29.615 "data_offset": 2048, 00:20:29.615 "data_size": 63488 00:20:29.615 }, 00:20:29.615 { 00:20:29.615 "name": null, 00:20:29.615 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:29.615 "is_configured": false, 00:20:29.615 "data_offset": 2048, 00:20:29.615 "data_size": 63488 00:20:29.615 } 00:20:29.615 ] 00:20:29.615 }' 00:20:29.615 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.615 11:31:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.180 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:20:30.180 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:30.438 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:20:30.438 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:30.696 [2024-07-25 11:31:46.379600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:30.696 [2024-07-25 11:31:46.379974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.696 [2024-07-25 11:31:46.380064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:30.696 [2024-07-25 11:31:46.380315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.696 [2024-07-25 11:31:46.380912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.696 [2024-07-25 11:31:46.380945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:30.696 [2024-07-25 11:31:46.381063] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:30.696 [2024-07-25 11:31:46.381100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:30.696 [2024-07-25 11:31:46.381275] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:30.696 [2024-07-25 11:31:46.381296] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:30.696 [2024-07-25 11:31:46.381624] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:30.696 [2024-07-25 11:31:46.381979] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:30.696 [2024-07-25 11:31:46.382113] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:30.696 [2024-07-25 11:31:46.382418] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.696 pt4 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.696 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.955 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.955 "name": "raid_bdev1", 00:20:30.955 "uuid": "8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4", 00:20:30.955 "strip_size_kb": 0, 00:20:30.955 "state": "online", 00:20:30.955 "raid_level": "raid1", 00:20:30.955 "superblock": true, 00:20:30.955 "num_base_bdevs": 4, 00:20:30.955 "num_base_bdevs_discovered": 3, 00:20:30.955 "num_base_bdevs_operational": 3, 00:20:30.955 "base_bdevs_list": [ 00:20:30.955 { 00:20:30.955 "name": null, 00:20:30.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.955 "is_configured": false, 00:20:30.955 "data_offset": 2048, 00:20:30.955 "data_size": 63488 00:20:30.955 }, 00:20:30.955 { 00:20:30.955 "name": "pt2", 00:20:30.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.955 "is_configured": true, 00:20:30.955 "data_offset": 2048, 00:20:30.955 "data_size": 63488 00:20:30.955 }, 00:20:30.955 { 00:20:30.955 "name": "pt3", 00:20:30.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:30.955 "is_configured": true, 00:20:30.955 "data_offset": 2048, 00:20:30.955 "data_size": 63488 00:20:30.955 }, 00:20:30.955 { 00:20:30.955 "name": "pt4", 00:20:30.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:30.955 "is_configured": true, 00:20:30.955 "data_offset": 2048, 00:20:30.955 "data_size": 63488 00:20:30.955 } 00:20:30.955 ] 00:20:30.955 }' 00:20:30.955 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.955 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.521 11:31:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:31.521 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:31.780 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:20:31.780 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.780 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:20:32.038 [2024-07-25 11:31:47.797598] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 '!=' 8a9607e5-0e08-42ef-b52a-b2c3a6f0e6f4 ']' 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 84502 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84502 ']' 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84502 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84502 00:20:32.038 killing process with pid 84502 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84502' 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84502 00:20:32.038 [2024-07-25 11:31:47.848427] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.038 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84502 00:20:32.038 [2024-07-25 11:31:47.848518] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.038 [2024-07-25 11:31:47.848684] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.038 [2024-07-25 11:31:47.848702] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:32.605 [2024-07-25 11:31:48.197526] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.542 11:31:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:20:33.542 00:20:33.542 real 0m28.472s 00:20:33.542 user 0m52.235s 00:20:33.542 sys 0m3.627s 00:20:33.542 11:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.542 11:31:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.542 ************************************ 00:20:33.542 END TEST raid_superblock_test 00:20:33.542 ************************************ 00:20:33.542 11:31:49 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:20:33.542 11:31:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 
5 -le 1 ']' 00:20:33.542 11:31:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.542 11:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.542 ************************************ 00:20:33.542 START TEST raid_read_error_test 00:20:33.542 ************************************ 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.OYxQzAqNlV 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=85342 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 
85342 /var/tmp/spdk-raid.sock 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85342 ']' 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:33.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.542 11:31:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.801 [2024-07-25 11:31:49.489542] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:20:33.801 [2024-07-25 11:31:49.489754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85342 ] 00:20:33.801 [2024-07-25 11:31:49.668182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.368 [2024-07-25 11:31:49.944085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.368 [2024-07-25 11:31:50.159059] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.368 [2024-07-25 11:31:50.159131] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.626 11:31:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.626 11:31:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:34.626 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:34.626 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:34.885 BaseBdev1_malloc 00:20:34.885 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:35.143 true 00:20:35.143 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:35.402 [2024-07-25 11:31:51.184684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:35.402 [2024-07-25 11:31:51.184767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.402 [2024-07-25 11:31:51.184805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:35.402 [2024-07-25 11:31:51.184821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.402 [2024-07-25 11:31:51.187661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:35.402 [2024-07-25 11:31:51.187720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:35.402 BaseBdev1 00:20:35.402 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:35.402 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:35.662 BaseBdev2_malloc 00:20:35.662 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:36.228 true 00:20:36.228 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:36.228 [2024-07-25 11:31:52.084651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:36.228 [2024-07-25 11:31:52.084729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.228 [2024-07-25 11:31:52.084769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:36.228 [2024-07-25 11:31:52.084785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.228 [2024-07-25 11:31:52.087691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.228 [2024-07-25 11:31:52.087751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:36.228 BaseBdev2 00:20:36.228 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:36.228 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:36.486 BaseBdev3_malloc 00:20:36.745 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:36.745 true 00:20:36.745 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:37.004 [2024-07-25 11:31:52.795605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:37.004 [2024-07-25 11:31:52.795713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.004 [2024-07-25 11:31:52.795765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:37.004 [2024-07-25 11:31:52.795781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.004 [2024-07-25 11:31:52.798466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.004 [2024-07-25 11:31:52.798507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:37.004 BaseBdev3 00:20:37.004 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:37.004 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:37.264 
BaseBdev4_malloc 00:20:37.264 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:37.523 true 00:20:37.523 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:37.781 [2024-07-25 11:31:53.539889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:37.781 [2024-07-25 11:31:53.539974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.781 [2024-07-25 11:31:53.540023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:37.781 [2024-07-25 11:31:53.540038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.781 [2024-07-25 11:31:53.542836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.781 [2024-07-25 11:31:53.542892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:37.781 BaseBdev4 00:20:37.781 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:38.039 [2024-07-25 11:31:53.780073] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.039 [2024-07-25 11:31:53.782529] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:38.039 [2024-07-25 11:31:53.782637] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:38.039 [2024-07-25 11:31:53.782740] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:38.039 [2024-07-25 11:31:53.783103] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:20:38.039 [2024-07-25 11:31:53.783122] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:38.039 [2024-07-25 11:31:53.783532] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:38.039 [2024-07-25 11:31:53.783827] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:20:38.039 [2024-07-25 11:31:53.783849] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:20:38.039 [2024-07-25 11:31:53.784138] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.039 11:31:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.039 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.298 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.298 "name": "raid_bdev1", 00:20:38.298 "uuid": "1d7edbc0-bb25-4bec-af78-12c9cfa6432c", 00:20:38.298 "strip_size_kb": 0, 00:20:38.298 "state": "online", 00:20:38.298 "raid_level": "raid1", 00:20:38.298 "superblock": true, 00:20:38.298 "num_base_bdevs": 4, 00:20:38.298 "num_base_bdevs_discovered": 4, 00:20:38.298 "num_base_bdevs_operational": 4, 00:20:38.298 "base_bdevs_list": [ 00:20:38.298 { 00:20:38.298 "name": "BaseBdev1", 00:20:38.298 "uuid": "cb72f96f-2318-5ca3-b33e-59a7611c81a8", 00:20:38.298 "is_configured": true, 00:20:38.298 "data_offset": 2048, 00:20:38.298 "data_size": 63488 00:20:38.298 }, 00:20:38.298 { 00:20:38.298 "name": "BaseBdev2", 00:20:38.298 "uuid": "94dbdf62-79ef-5885-ab60-1cd5ea7a4189", 00:20:38.298 "is_configured": true, 00:20:38.298 "data_offset": 2048, 00:20:38.298 "data_size": 63488 00:20:38.298 }, 00:20:38.298 { 00:20:38.298 "name": "BaseBdev3", 00:20:38.298 "uuid": "5dd8aea6-5bf8-5528-be8b-1f1c83b7a691", 00:20:38.298 "is_configured": true, 00:20:38.298 "data_offset": 2048, 00:20:38.298 "data_size": 63488 00:20:38.298 }, 00:20:38.298 { 00:20:38.298 "name": "BaseBdev4", 00:20:38.298 "uuid": "b9d830a3-00be-5b26-a0ea-972ab2554709", 00:20:38.298 "is_configured": true, 00:20:38.298 "data_offset": 2048, 00:20:38.298 "data_size": 63488 00:20:38.298 } 00:20:38.298 ] 00:20:38.298 }' 00:20:38.298 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.298 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.895 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:38.895 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:39.153 [2024-07-25 11:31:54.801894] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:40.087 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.345 11:31:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.345 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.603 11:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.603 "name": "raid_bdev1", 00:20:40.603 "uuid": "1d7edbc0-bb25-4bec-af78-12c9cfa6432c", 00:20:40.603 "strip_size_kb": 0, 00:20:40.603 "state": "online", 00:20:40.603 "raid_level": "raid1", 00:20:40.603 "superblock": true, 00:20:40.603 "num_base_bdevs": 4, 00:20:40.603 "num_base_bdevs_discovered": 4, 00:20:40.603 "num_base_bdevs_operational": 4, 00:20:40.603 "base_bdevs_list": [ 00:20:40.603 { 00:20:40.603 "name": "BaseBdev1", 00:20:40.603 "uuid": "cb72f96f-2318-5ca3-b33e-59a7611c81a8", 00:20:40.603 "is_configured": true, 00:20:40.603 "data_offset": 2048, 00:20:40.603 "data_size": 63488 00:20:40.603 }, 00:20:40.603 { 00:20:40.603 "name": "BaseBdev2", 00:20:40.604 "uuid": "94dbdf62-79ef-5885-ab60-1cd5ea7a4189", 00:20:40.604 "is_configured": true, 00:20:40.604 "data_offset": 2048, 00:20:40.604 "data_size": 63488 00:20:40.604 }, 00:20:40.604 { 00:20:40.604 "name": "BaseBdev3", 00:20:40.604 "uuid": "5dd8aea6-5bf8-5528-be8b-1f1c83b7a691", 00:20:40.604 "is_configured": true, 00:20:40.604 "data_offset": 2048, 00:20:40.604 "data_size": 63488 00:20:40.604 }, 00:20:40.604 { 00:20:40.604 "name": "BaseBdev4", 00:20:40.604 "uuid": "b9d830a3-00be-5b26-a0ea-972ab2554709", 00:20:40.604 "is_configured": true, 00:20:40.604 "data_offset": 2048, 00:20:40.604 "data_size": 63488 00:20:40.604 } 00:20:40.604 ] 00:20:40.604 }' 00:20:40.604 11:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.604 11:31:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.170 11:31:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:41.428 [2024-07-25 11:31:57.184490] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.428 [2024-07-25 11:31:57.184536] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.428 [2024-07-25 11:31:57.188081] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.428 [2024-07-25 11:31:57.188140] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.428 [2024-07-25 11:31:57.188312] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:20:41.428 [2024-07-25 11:31:57.188327] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:20:41.428 0 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 85342 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85342 ']' 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85342 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85342 00:20:41.428 killing process with pid 85342 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85342' 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85342 00:20:41.428 [2024-07-25 11:31:57.227195] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.428 11:31:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85342 00:20:41.686 [2024-07-25 11:31:57.532339] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.OYxQzAqNlV 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:43.061 ************************************ 00:20:43.061 END TEST raid_read_error_test 00:20:43.061 ************************************ 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:43.061 00:20:43.061 real 0m9.405s 00:20:43.061 user 0m14.395s 00:20:43.061 sys 0m1.208s 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.061 11:31:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.061 11:31:58 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:20:43.061 11:31:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:43.061 11:31:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.061 11:31:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.061 ************************************ 00:20:43.061 START TEST raid_write_error_test 00:20:43.061 ************************************ 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 
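The write-error variant that starts here exercises the same flow as the read test above. A condensed sketch of that flow, using the commands that appear verbatim later in this trace (stack creation for BaseBdev2-4 abbreviated; in the real run bdevperf is launched and awaited by the harness rather than backgrounded like this):

    # illustrative outline of raid_io_error_test raid1 4 write (not part of the captured log)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # bdevperf is started with -z so it waits for RPC configuration before issuing I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
    # each base bdev is a malloc -> error -> passthru stack, e.g. for BaseBdev1:
    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc -s "$sock" bdev_error_create BaseBdev1_malloc
    $rpc -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # BaseBdev2-4 are built the same way, then the raid1 bdev is assembled:
    $rpc -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    # a write failure is injected on the first base bdev and the workload is kicked off;
    # the test then expects raid_bdev1 to drop to 3 operational base bdevs
    $rpc -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests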
00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.yu01XVQ2He 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=85553 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 85553 /var/tmp/spdk-raid.sock 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85553 ']' 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:43.061 11:31:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:43.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.061 11:31:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.318 [2024-07-25 11:31:58.961116] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:20:43.318 [2024-07-25 11:31:58.961306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85553 ] 00:20:43.318 [2024-07-25 11:31:59.139424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.576 [2024-07-25 11:31:59.429660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.833 [2024-07-25 11:31:59.638959] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.833 [2024-07-25 11:31:59.639023] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.091 11:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.091 11:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:44.091 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:44.091 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:44.349 BaseBdev1_malloc 00:20:44.607 11:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:44.864 true 00:20:44.864 11:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:45.122 [2024-07-25 11:32:00.764529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:45.122 [2024-07-25 11:32:00.764671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.122 [2024-07-25 11:32:00.764713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:45.122 [2024-07-25 11:32:00.764730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.122 [2024-07-25 11:32:00.767649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.122 [2024-07-25 11:32:00.767721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:45.122 BaseBdev1 00:20:45.122 11:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:45.122 11:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:45.380 BaseBdev2_malloc 00:20:45.380 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:45.638 true 00:20:45.638 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:45.896 [2024-07-25 11:32:01.618587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:45.896 [2024-07-25 11:32:01.618714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.896 [2024-07-25 11:32:01.618755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:45.896 [2024-07-25 11:32:01.618788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.896 [2024-07-25 11:32:01.621818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.896 [2024-07-25 11:32:01.621864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:45.896 BaseBdev2 00:20:45.896 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:45.896 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:46.153 BaseBdev3_malloc 00:20:46.153 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:46.411 true 00:20:46.411 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:46.669 [2024-07-25 11:32:02.370279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:46.669 [2024-07-25 11:32:02.370386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.669 [2024-07-25 11:32:02.370429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:46.669 [2024-07-25 11:32:02.370445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.669 [2024-07-25 11:32:02.373579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.669 [2024-07-25 11:32:02.373806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:46.669 BaseBdev3 00:20:46.669 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:46.669 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:46.928 BaseBdev4_malloc 00:20:46.928 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:47.186 true 00:20:47.186 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:47.444 [2024-07-25 11:32:03.174890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:47.444 [2024-07-25 11:32:03.174978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.444 [2024-07-25 11:32:03.175018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:47.444 [2024-07-25 11:32:03.175035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.444 [2024-07-25 11:32:03.177919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.444 [2024-07-25 11:32:03.177965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:47.444 BaseBdev4 00:20:47.444 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:47.702 [2024-07-25 11:32:03.419029] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.702 [2024-07-25 11:32:03.421542] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.702 [2024-07-25 11:32:03.421683] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.702 [2024-07-25 11:32:03.421776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:47.702 [2024-07-25 11:32:03.422112] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:20:47.702 [2024-07-25 11:32:03.422131] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:47.702 [2024-07-25 11:32:03.422503] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:47.702 [2024-07-25 11:32:03.422798] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:20:47.702 [2024-07-25 11:32:03.422821] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:20:47.702 [2024-07-25 11:32:03.423073] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.702 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.959 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.959 "name": "raid_bdev1", 00:20:47.959 "uuid": "41ed5c99-8381-4885-8e41-481adaec4785", 00:20:47.959 "strip_size_kb": 0, 00:20:47.959 "state": "online", 00:20:47.959 "raid_level": "raid1", 00:20:47.959 "superblock": true, 00:20:47.959 "num_base_bdevs": 4, 00:20:47.959 "num_base_bdevs_discovered": 4, 00:20:47.959 "num_base_bdevs_operational": 4, 00:20:47.959 "base_bdevs_list": [ 00:20:47.959 { 00:20:47.959 "name": "BaseBdev1", 00:20:47.959 "uuid": "6273dd33-f2e4-5ea8-9eda-c3200ed3f70b", 00:20:47.959 "is_configured": true, 00:20:47.959 "data_offset": 2048, 00:20:47.959 "data_size": 63488 00:20:47.959 }, 00:20:47.959 { 00:20:47.959 "name": "BaseBdev2", 00:20:47.959 "uuid": "b56dfa4e-8dc1-5045-8f92-cacebcb0d23a", 00:20:47.959 "is_configured": true, 00:20:47.959 "data_offset": 2048, 00:20:47.959 "data_size": 63488 00:20:47.959 }, 00:20:47.959 { 00:20:47.959 "name": "BaseBdev3", 00:20:47.959 "uuid": "5b010543-b172-57f9-8fa5-b3a1a294c63f", 00:20:47.959 "is_configured": true, 00:20:47.959 "data_offset": 2048, 00:20:47.960 "data_size": 63488 00:20:47.960 }, 00:20:47.960 { 00:20:47.960 "name": "BaseBdev4", 00:20:47.960 "uuid": "c2fd43eb-1889-50c6-b59c-12d4e2312494", 00:20:47.960 "is_configured": true, 00:20:47.960 "data_offset": 2048, 00:20:47.960 "data_size": 63488 00:20:47.960 } 00:20:47.960 ] 00:20:47.960 }' 00:20:47.960 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.960 11:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.525 11:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:48.525 11:32:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:48.783 [2024-07-25 11:32:04.440717] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:49.773 [2024-07-25 11:32:05.602998] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:49.773 [2024-07-25 11:32:05.603304] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.773 [2024-07-25 11:32:05.603737] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.773 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.031 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.031 "name": "raid_bdev1", 00:20:50.031 "uuid": "41ed5c99-8381-4885-8e41-481adaec4785", 00:20:50.031 "strip_size_kb": 0, 00:20:50.031 "state": "online", 00:20:50.031 "raid_level": "raid1", 00:20:50.031 "superblock": true, 00:20:50.031 "num_base_bdevs": 4, 00:20:50.031 "num_base_bdevs_discovered": 3, 00:20:50.031 "num_base_bdevs_operational": 3, 00:20:50.031 "base_bdevs_list": [ 00:20:50.031 { 00:20:50.031 "name": null, 00:20:50.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.031 "is_configured": false, 00:20:50.031 "data_offset": 2048, 00:20:50.031 "data_size": 63488 00:20:50.031 }, 00:20:50.031 { 00:20:50.031 "name": "BaseBdev2", 00:20:50.031 "uuid": "b56dfa4e-8dc1-5045-8f92-cacebcb0d23a", 00:20:50.031 "is_configured": true, 00:20:50.031 "data_offset": 2048, 00:20:50.031 "data_size": 63488 00:20:50.031 }, 00:20:50.031 { 00:20:50.031 "name": "BaseBdev3", 00:20:50.031 "uuid": "5b010543-b172-57f9-8fa5-b3a1a294c63f", 00:20:50.031 "is_configured": true, 00:20:50.031 "data_offset": 2048, 00:20:50.031 "data_size": 63488 00:20:50.031 }, 00:20:50.031 { 00:20:50.031 "name": "BaseBdev4", 00:20:50.031 "uuid": "c2fd43eb-1889-50c6-b59c-12d4e2312494", 00:20:50.031 "is_configured": true, 00:20:50.031 "data_offset": 2048, 00:20:50.031 "data_size": 63488 00:20:50.031 } 00:20:50.031 ] 00:20:50.031 }' 00:20:50.031 11:32:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.031 11:32:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:50.964 [2024-07-25 11:32:06.805293] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:50.964 [2024-07-25 11:32:06.805346] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:50.964 0 00:20:50.964 [2024-07-25 11:32:06.808453] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:50.964 [2024-07-25 11:32:06.808522] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.964 [2024-07-25 
11:32:06.808690] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:50.964 [2024-07-25 11:32:06.808713] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 85553 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85553 ']' 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85553 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.964 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85553 00:20:51.222 killing process with pid 85553 00:20:51.222 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:51.223 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:51.223 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85553' 00:20:51.223 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85553 00:20:51.223 [2024-07-25 11:32:06.855063] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:51.223 11:32:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85553 00:20:51.481 [2024-07-25 11:32:07.154227] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.yu01XVQ2He 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:52.854 00:20:52.854 real 0m9.537s 00:20:52.854 user 0m14.693s 00:20:52.854 sys 0m1.151s 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:52.854 ************************************ 00:20:52.854 END TEST raid_write_error_test 00:20:52.854 ************************************ 00:20:52.854 11:32:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.854 11:32:08 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:20:52.854 11:32:08 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:20:52.854 11:32:08 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:20:52.854 11:32:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:52.854 11:32:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:52.854 11:32:08 bdev_raid -- common/autotest_common.sh@10 -- 
# set +x 00:20:52.854 ************************************ 00:20:52.854 START TEST raid_rebuild_test 00:20:52.854 ************************************ 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:20:52.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=85755 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 85755 /var/tmp/spdk-raid.sock 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85755 ']' 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:52.854 11:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.854 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:52.854 Zero copy mechanism will not be used. 00:20:52.854 [2024-07-25 11:32:08.535231] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:20:52.854 [2024-07-25 11:32:08.535410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85755 ] 00:20:52.854 [2024-07-25 11:32:08.711415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.112 [2024-07-25 11:32:08.973096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.371 [2024-07-25 11:32:09.175571] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.371 [2024-07-25 11:32:09.175672] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.630 11:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.630 11:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:20:53.630 11:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:53.630 11:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:53.889 BaseBdev1_malloc 00:20:53.889 11:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:54.147 [2024-07-25 11:32:09.964067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:54.147 [2024-07-25 11:32:09.964151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.147 [2024-07-25 11:32:09.964191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:54.147 [2024-07-25 11:32:09.964209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.147 [2024-07-25 11:32:09.967014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.147 [2024-07-25 11:32:09.967060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:54.147 BaseBdev1 00:20:54.147 11:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:20:54.147 11:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:54.405 BaseBdev2_malloc 00:20:54.405 11:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:54.662 [2024-07-25 11:32:10.467462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:54.662 [2024-07-25 11:32:10.467748] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:54.662 [2024-07-25 11:32:10.468102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:54.662 [2024-07-25 11:32:10.468376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:54.662 [2024-07-25 11:32:10.473830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:54.662 [2024-07-25 11:32:10.474139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:54.662 BaseBdev2 00:20:54.662 11:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:54.920 spare_malloc 00:20:55.179 11:32:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:55.437 spare_delay 00:20:55.437 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:55.695 [2024-07-25 11:32:11.372024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.695 [2024-07-25 11:32:11.372127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.695 [2024-07-25 11:32:11.372170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:55.695 [2024-07-25 11:32:11.372186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.695 [2024-07-25 11:32:11.375066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.695 [2024-07-25 11:32:11.375112] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.695 spare 00:20:55.695 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:55.953 [2024-07-25 11:32:11.612167] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.953 [2024-07-25 11:32:11.614705] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.953 [2024-07-25 11:32:11.614864] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:55.953 [2024-07-25 11:32:11.614882] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:55.953 [2024-07-25 11:32:11.615317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:55.953 [2024-07-25 11:32:11.615560] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:55.953 [2024-07-25 11:32:11.615584] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:55.953 [2024-07-25 11:32:11.615813] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.953 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.213 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.213 "name": "raid_bdev1", 00:20:56.213 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:20:56.213 "strip_size_kb": 0, 00:20:56.213 "state": "online", 00:20:56.213 "raid_level": "raid1", 00:20:56.213 "superblock": false, 00:20:56.213 "num_base_bdevs": 2, 00:20:56.213 "num_base_bdevs_discovered": 2, 00:20:56.213 "num_base_bdevs_operational": 2, 00:20:56.213 "base_bdevs_list": [ 00:20:56.213 { 00:20:56.213 "name": "BaseBdev1", 00:20:56.213 "uuid": "1909c4a6-d2fb-5b0a-b99c-932cbc9ec29d", 00:20:56.213 "is_configured": true, 00:20:56.213 "data_offset": 0, 00:20:56.213 "data_size": 65536 00:20:56.213 }, 00:20:56.213 { 00:20:56.213 "name": "BaseBdev2", 00:20:56.213 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:20:56.213 "is_configured": true, 00:20:56.213 "data_offset": 0, 00:20:56.213 "data_size": 65536 00:20:56.213 } 00:20:56.213 ] 00:20:56.213 }' 00:20:56.213 11:32:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.213 11:32:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.781 11:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:56.781 11:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:20:57.040 [2024-07-25 11:32:12.716815] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.040 11:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:20:57.040 11:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.040 11:32:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:57.298 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.299 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:57.557 [2024-07-25 11:32:13.240678] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:57.557 /dev/nbd0 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:57.557 1+0 records in 00:20:57.557 1+0 records out 00:20:57.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670268 s, 6.1 MB/s 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:20:57.557 11:32:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:04.114 65536+0 records in 00:21:04.114 65536+0 records out 00:21:04.114 33554432 bytes (34 MB, 32 MiB) copied, 6.66638 s, 5.0 MB/s 00:21:04.114 11:32:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:04.115 11:32:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:04.373 [2024-07-25 11:32:20.231945] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.373 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:04.630 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:04.630 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:04.630 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:04.631 [2024-07-25 11:32:20.493009] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.631 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.889 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.889 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
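The trace above has just pulled BaseBdev1 out of the mirror with bdev_raid_remove_base_bdev and is re-reading the array through bdev_raid_get_bdevs piped into jq. A minimal standalone sketch of that degraded-state check follows, assuming the same rpc.py path, RPC socket and bdev names this run uses; the values it asserts (state online, one base bdev discovered and operational) are the ones the JSON printed next reports.

# Sketch only: replay the degraded-state check by hand (same socket,
# rpc.py path and bdev names as this test run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Drop one mirror leg, then fetch the surviving raid descriptor.
$rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1
info=$($rpc -s $sock bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')

# raid1 must stay online with a single operational base bdev.
[ "$(jq -r '.state' <<< "$info")" = online ] || exit 1
[ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 1 ] || exit 1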
00:21:05.147 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.147 "name": "raid_bdev1", 00:21:05.147 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:05.147 "strip_size_kb": 0, 00:21:05.147 "state": "online", 00:21:05.147 "raid_level": "raid1", 00:21:05.147 "superblock": false, 00:21:05.147 "num_base_bdevs": 2, 00:21:05.147 "num_base_bdevs_discovered": 1, 00:21:05.147 "num_base_bdevs_operational": 1, 00:21:05.147 "base_bdevs_list": [ 00:21:05.147 { 00:21:05.148 "name": null, 00:21:05.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.148 "is_configured": false, 00:21:05.148 "data_offset": 0, 00:21:05.148 "data_size": 65536 00:21:05.148 }, 00:21:05.148 { 00:21:05.148 "name": "BaseBdev2", 00:21:05.148 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:05.148 "is_configured": true, 00:21:05.148 "data_offset": 0, 00:21:05.148 "data_size": 65536 00:21:05.148 } 00:21:05.148 ] 00:21:05.148 }' 00:21:05.148 11:32:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.148 11:32:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.714 11:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.973 [2024-07-25 11:32:21.637344] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.973 [2024-07-25 11:32:21.652834] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:21:05.973 [2024-07-25 11:32:21.655226] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.973 11:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.909 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.167 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.167 "name": "raid_bdev1", 00:21:07.167 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:07.167 "strip_size_kb": 0, 00:21:07.167 "state": "online", 00:21:07.167 "raid_level": "raid1", 00:21:07.167 "superblock": false, 00:21:07.167 "num_base_bdevs": 2, 00:21:07.167 "num_base_bdevs_discovered": 2, 00:21:07.167 "num_base_bdevs_operational": 2, 00:21:07.167 "process": { 00:21:07.167 "type": "rebuild", 00:21:07.167 "target": "spare", 00:21:07.167 "progress": { 00:21:07.167 "blocks": 24576, 00:21:07.167 "percent": 37 00:21:07.167 } 00:21:07.167 }, 00:21:07.167 "base_bdevs_list": [ 00:21:07.167 { 00:21:07.167 "name": "spare", 00:21:07.167 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:07.167 "is_configured": true, 00:21:07.167 "data_offset": 0, 00:21:07.167 
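Re-adding a member with bdev_raid_add_base_bdev is what starts the rebuild being verified here: while it runs, the raid_bdev1 entry returned by bdev_raid_get_bdevs carries a process object with type "rebuild", the target being reconstructed, and a blocks/percent progress counter. A rough polling loop over that field, reusing the same jq expressions as the script, might look like the sketch below (rpc.py path, socket and the spare name are taken from this run).

# Rough sketch: watch the rebuild started by bdev_raid_add_base_bdev.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

$rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare
while :; do
    proc=$($rpc -s $sock bdev_raid_get_bdevs all |
        jq '.[] | select(.name == "raid_bdev1") | .process')
    # The process object disappears once the rebuild is done, so "none" ends the loop.
    [ "$(jq -r '.type // "none"' <<< "$proc")" = rebuild ] || break
    echo "rebuilding $(jq -r '.target' <<< "$proc"): $(jq -r '.progress.percent' <<< "$proc")%"
    sleep 1
done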
"data_size": 65536 00:21:07.167 }, 00:21:07.167 { 00:21:07.167 "name": "BaseBdev2", 00:21:07.167 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:07.167 "is_configured": true, 00:21:07.167 "data_offset": 0, 00:21:07.167 "data_size": 65536 00:21:07.167 } 00:21:07.167 ] 00:21:07.167 }' 00:21:07.167 11:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:07.167 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.167 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:07.426 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.426 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:07.685 [2024-07-25 11:32:23.316768] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.685 [2024-07-25 11:32:23.367988] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.685 [2024-07-25 11:32:23.368074] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.685 [2024-07-25 11:32:23.368101] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.685 [2024-07-25 11:32:23.368114] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.685 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.943 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.943 "name": "raid_bdev1", 00:21:07.943 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:07.943 "strip_size_kb": 0, 00:21:07.943 "state": "online", 00:21:07.943 "raid_level": "raid1", 00:21:07.943 "superblock": false, 00:21:07.943 "num_base_bdevs": 2, 00:21:07.943 "num_base_bdevs_discovered": 1, 00:21:07.943 "num_base_bdevs_operational": 1, 00:21:07.943 "base_bdevs_list": [ 00:21:07.943 { 00:21:07.943 "name": null, 00:21:07.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.943 "is_configured": false, 
00:21:07.943 "data_offset": 0, 00:21:07.943 "data_size": 65536 00:21:07.943 }, 00:21:07.943 { 00:21:07.943 "name": "BaseBdev2", 00:21:07.943 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:07.943 "is_configured": true, 00:21:07.943 "data_offset": 0, 00:21:07.943 "data_size": 65536 00:21:07.943 } 00:21:07.943 ] 00:21:07.943 }' 00:21:07.943 11:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.943 11:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.510 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.767 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:08.767 "name": "raid_bdev1", 00:21:08.767 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:08.767 "strip_size_kb": 0, 00:21:08.767 "state": "online", 00:21:08.767 "raid_level": "raid1", 00:21:08.767 "superblock": false, 00:21:08.767 "num_base_bdevs": 2, 00:21:08.767 "num_base_bdevs_discovered": 1, 00:21:08.767 "num_base_bdevs_operational": 1, 00:21:08.767 "base_bdevs_list": [ 00:21:08.767 { 00:21:08.767 "name": null, 00:21:08.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.767 "is_configured": false, 00:21:08.767 "data_offset": 0, 00:21:08.767 "data_size": 65536 00:21:08.767 }, 00:21:08.767 { 00:21:08.767 "name": "BaseBdev2", 00:21:08.767 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:08.767 "is_configured": true, 00:21:08.767 "data_offset": 0, 00:21:08.767 "data_size": 65536 00:21:08.767 } 00:21:08.767 ] 00:21:08.767 }' 00:21:08.767 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:09.025 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:09.025 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:09.025 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:09.025 11:32:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:09.283 [2024-07-25 11:32:24.982217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.283 [2024-07-25 11:32:24.996899] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:21:09.283 [2024-07-25 11:32:24.999246] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.283 11:32:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:21:10.216 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:21:10.216 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:10.217 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:10.217 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:10.217 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:10.217 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.217 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.474 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.474 "name": "raid_bdev1", 00:21:10.475 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:10.475 "strip_size_kb": 0, 00:21:10.475 "state": "online", 00:21:10.475 "raid_level": "raid1", 00:21:10.475 "superblock": false, 00:21:10.475 "num_base_bdevs": 2, 00:21:10.475 "num_base_bdevs_discovered": 2, 00:21:10.475 "num_base_bdevs_operational": 2, 00:21:10.475 "process": { 00:21:10.475 "type": "rebuild", 00:21:10.475 "target": "spare", 00:21:10.475 "progress": { 00:21:10.475 "blocks": 24576, 00:21:10.475 "percent": 37 00:21:10.475 } 00:21:10.475 }, 00:21:10.475 "base_bdevs_list": [ 00:21:10.475 { 00:21:10.475 "name": "spare", 00:21:10.475 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:10.475 "is_configured": true, 00:21:10.475 "data_offset": 0, 00:21:10.475 "data_size": 65536 00:21:10.475 }, 00:21:10.475 { 00:21:10.475 "name": "BaseBdev2", 00:21:10.475 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:10.475 "is_configured": true, 00:21:10.475 "data_offset": 0, 00:21:10.475 "data_size": 65536 00:21:10.475 } 00:21:10.475 ] 00:21:10.475 }' 00:21:10.475 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:10.475 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.475 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=910 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.732 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.990 "name": "raid_bdev1", 00:21:10.990 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:10.990 "strip_size_kb": 0, 00:21:10.990 "state": "online", 00:21:10.990 "raid_level": "raid1", 00:21:10.990 "superblock": false, 00:21:10.990 "num_base_bdevs": 2, 00:21:10.990 "num_base_bdevs_discovered": 2, 00:21:10.990 "num_base_bdevs_operational": 2, 00:21:10.990 "process": { 00:21:10.990 "type": "rebuild", 00:21:10.990 "target": "spare", 00:21:10.990 "progress": { 00:21:10.990 "blocks": 32768, 00:21:10.990 "percent": 50 00:21:10.990 } 00:21:10.990 }, 00:21:10.990 "base_bdevs_list": [ 00:21:10.990 { 00:21:10.990 "name": "spare", 00:21:10.990 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:10.990 "is_configured": true, 00:21:10.990 "data_offset": 0, 00:21:10.990 "data_size": 65536 00:21:10.990 }, 00:21:10.990 { 00:21:10.990 "name": "BaseBdev2", 00:21:10.990 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:10.990 "is_configured": true, 00:21:10.990 "data_offset": 0, 00:21:10.990 "data_size": 65536 00:21:10.990 } 00:21:10.990 ] 00:21:10.990 }' 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.990 11:32:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.923 11:32:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.182 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:12.182 "name": "raid_bdev1", 00:21:12.182 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:12.182 "strip_size_kb": 0, 00:21:12.182 "state": "online", 00:21:12.182 "raid_level": "raid1", 00:21:12.182 "superblock": false, 00:21:12.182 "num_base_bdevs": 2, 00:21:12.182 "num_base_bdevs_discovered": 2, 00:21:12.182 "num_base_bdevs_operational": 2, 00:21:12.182 "process": { 00:21:12.182 "type": "rebuild", 00:21:12.182 "target": "spare", 00:21:12.182 "progress": { 00:21:12.182 "blocks": 61440, 00:21:12.182 "percent": 93 00:21:12.182 } 00:21:12.182 
}, 00:21:12.182 "base_bdevs_list": [ 00:21:12.182 { 00:21:12.182 "name": "spare", 00:21:12.182 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:12.182 "is_configured": true, 00:21:12.182 "data_offset": 0, 00:21:12.182 "data_size": 65536 00:21:12.182 }, 00:21:12.182 { 00:21:12.182 "name": "BaseBdev2", 00:21:12.182 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:12.182 "is_configured": true, 00:21:12.182 "data_offset": 0, 00:21:12.182 "data_size": 65536 00:21:12.182 } 00:21:12.182 ] 00:21:12.182 }' 00:21:12.440 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:12.440 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.440 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:12.440 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.440 11:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:12.440 [2024-07-25 11:32:28.223466] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:12.440 [2024-07-25 11:32:28.223562] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:12.440 [2024-07-25 11:32:28.223684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.376 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.634 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.634 "name": "raid_bdev1", 00:21:13.634 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:13.634 "strip_size_kb": 0, 00:21:13.634 "state": "online", 00:21:13.634 "raid_level": "raid1", 00:21:13.634 "superblock": false, 00:21:13.634 "num_base_bdevs": 2, 00:21:13.634 "num_base_bdevs_discovered": 2, 00:21:13.634 "num_base_bdevs_operational": 2, 00:21:13.634 "base_bdevs_list": [ 00:21:13.634 { 00:21:13.634 "name": "spare", 00:21:13.634 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:13.634 "is_configured": true, 00:21:13.634 "data_offset": 0, 00:21:13.634 "data_size": 65536 00:21:13.634 }, 00:21:13.634 { 00:21:13.634 "name": "BaseBdev2", 00:21:13.634 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:13.634 "is_configured": true, 00:21:13.634 "data_offset": 0, 00:21:13.634 "data_size": 65536 00:21:13.634 } 00:21:13.634 ] 00:21:13.634 }' 00:21:13.634 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.634 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:21:13.634 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.892 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:14.183 "name": "raid_bdev1", 00:21:14.183 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:14.183 "strip_size_kb": 0, 00:21:14.183 "state": "online", 00:21:14.183 "raid_level": "raid1", 00:21:14.183 "superblock": false, 00:21:14.183 "num_base_bdevs": 2, 00:21:14.183 "num_base_bdevs_discovered": 2, 00:21:14.183 "num_base_bdevs_operational": 2, 00:21:14.183 "base_bdevs_list": [ 00:21:14.183 { 00:21:14.183 "name": "spare", 00:21:14.183 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:14.183 "is_configured": true, 00:21:14.183 "data_offset": 0, 00:21:14.183 "data_size": 65536 00:21:14.183 }, 00:21:14.183 { 00:21:14.183 "name": "BaseBdev2", 00:21:14.183 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:14.183 "is_configured": true, 00:21:14.183 "data_offset": 0, 00:21:14.183 "data_size": 65536 00:21:14.183 } 00:21:14.183 ] 00:21:14.183 }' 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.183 
11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.183 11:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.450 11:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.450 "name": "raid_bdev1", 00:21:14.450 "uuid": "0c392d29-4b62-4ca7-b2fe-aa909a36587a", 00:21:14.450 "strip_size_kb": 0, 00:21:14.450 "state": "online", 00:21:14.450 "raid_level": "raid1", 00:21:14.450 "superblock": false, 00:21:14.450 "num_base_bdevs": 2, 00:21:14.450 "num_base_bdevs_discovered": 2, 00:21:14.450 "num_base_bdevs_operational": 2, 00:21:14.450 "base_bdevs_list": [ 00:21:14.450 { 00:21:14.450 "name": "spare", 00:21:14.450 "uuid": "72ad9c5b-19df-561d-be1f-86e9ded34365", 00:21:14.450 "is_configured": true, 00:21:14.450 "data_offset": 0, 00:21:14.450 "data_size": 65536 00:21:14.450 }, 00:21:14.450 { 00:21:14.450 "name": "BaseBdev2", 00:21:14.450 "uuid": "0c260260-9eb2-5707-82a7-ad58c05eb10a", 00:21:14.450 "is_configured": true, 00:21:14.450 "data_offset": 0, 00:21:14.450 "data_size": 65536 00:21:14.450 } 00:21:14.450 ] 00:21:14.450 }' 00:21:14.450 11:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.450 11:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.016 11:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:15.583 [2024-07-25 11:32:31.159282] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.583 [2024-07-25 11:32:31.159329] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.583 [2024-07-25 11:32:31.159435] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.583 [2024-07-25 11:32:31.159530] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.583 [2024-07-25 11:32:31.159553] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:15.583 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.583 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
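With the rebuild finished, the array has been torn down with bdev_raid_delete (bdev_raid_get_bdevs now returns an empty list), and the verify path attaches the original member and the rebuilt spare to NBD devices so their contents can be diffed from userspace. The cmp offset is 0 here because this variant runs without a superblock, so there is no metadata region to skip. A condensed sketch of that round trip, using the same RPCs and device nodes as the trace:

# Condensed sketch of the data-integrity pass (same socket, bdev and
# /dev/nbd* names as the trace; needs root and the nbd kernel module).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

$rpc -s $sock nbd_start_disk BaseBdev1 /dev/nbd0
$rpc -s $sock nbd_start_disk spare /dev/nbd1

# data_offset is 0 without a superblock, so compare from byte 0.
cmp -i 0 /dev/nbd0 /dev/nbd1 && echo 'rebuilt spare matches BaseBdev1'

$rpc -s $sock nbd_stop_disk /dev/nbd0
$rpc -s $sock nbd_stop_disk /dev/nbd1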
00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.841 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:16.100 /dev/nbd0 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.100 1+0 records in 00:21:16.100 1+0 records out 00:21:16.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683895 s, 6.0 MB/s 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:16.100 11:32:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:16.358 /dev/nbd1 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.358 1+0 records in 00:21:16.358 1+0 records out 00:21:16.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707678 s, 5.8 MB/s 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.358 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
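The grep loops against /proc/partitions that keep appearing around the nbd calls are the harness making sure the kernel has actually attached (or released) the device before the next dd or cmp touches it. An approximate, renamed version of those two wait helpers is sketched below; the real waitfornbd/waitfornbd_exit functions in autotest_common.sh also do the one-block direct-I/O read seen in the trace, which is omitted here, and their retry interval is an assumption.

# Approximate stand-ins for the wait helpers driven in the trace (names
# here are illustrative, not the real helper names).
wait_nbd_up() {
    for i in $(seq 1 20); do
        grep -q -w "$1" /proc/partitions && return 0
        sleep 0.1
    done
    return 1
}
wait_nbd_gone() {
    for i in $(seq 1 20); do
        grep -q -w "$1" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}

wait_nbd_up nbd1     # after nbd_start_disk
wait_nbd_gone nbd1   # after nbd_stop_disk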
00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 85755 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 85755 ']' 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85755 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85755 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:16.925 killing process with pid 85755 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85755' 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85755 00:21:16.925 Received shutdown signal, test time was about 60.000000 seconds 00:21:16.925 00:21:16.925 Latency(us) 00:21:16.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.925 =================================================================================================================== 00:21:16.925 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.925 [2024-07-25 11:32:32.793366] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:16.925 11:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85755 00:21:17.183 [2024-07-25 11:32:33.063195] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:18.556 11:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:21:18.556 00:21:18.556 real 0m25.801s 00:21:18.556 user 0m34.580s 00:21:18.556 sys 0m4.556s 00:21:18.556 11:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.556 11:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.556 ************************************ 00:21:18.556 END TEST raid_rebuild_test 00:21:18.556 ************************************ 00:21:18.557 11:32:34 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:21:18.557 11:32:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:18.557 11:32:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.557 11:32:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.557 
************************************ 00:21:18.557 START TEST raid_rebuild_test_sb 00:21:18.557 ************************************ 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=86281 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 86281 /var/tmp/spdk-raid.sock 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86281 ']' 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 
00:21:18.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.557 11:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.557 [2024-07-25 11:32:34.381413] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:18.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:18.557 Zero copy mechanism will not be used. 00:21:18.557 [2024-07-25 11:32:34.381562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86281 ] 00:21:18.815 [2024-07-25 11:32:34.546931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.073 [2024-07-25 11:32:34.787238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.332 [2024-07-25 11:32:34.990300] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:19.332 [2024-07-25 11:32:34.990366] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:19.590 11:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.590 11:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:19.590 11:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:19.590 11:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:19.848 BaseBdev1_malloc 00:21:19.848 11:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:20.106 [2024-07-25 11:32:35.750529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:20.106 [2024-07-25 11:32:35.750664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.106 [2024-07-25 11:32:35.750708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:20.106 [2024-07-25 11:32:35.750725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.106 [2024-07-25 11:32:35.753573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.106 [2024-07-25 11:32:35.753649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:20.106 BaseBdev1 00:21:20.106 11:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:21:20.106 11:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:20.365 BaseBdev2_malloc 00:21:20.365 11:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc 
-p BaseBdev2 00:21:20.623 [2024-07-25 11:32:36.264433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:20.623 [2024-07-25 11:32:36.264555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.623 [2024-07-25 11:32:36.264602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:20.623 [2024-07-25 11:32:36.264650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.623 [2024-07-25 11:32:36.267524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.623 [2024-07-25 11:32:36.267571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:20.623 BaseBdev2 00:21:20.623 11:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:20.881 spare_malloc 00:21:20.881 11:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:20.881 spare_delay 00:21:21.159 11:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:21.159 [2024-07-25 11:32:37.005936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.159 [2024-07-25 11:32:37.006048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.159 [2024-07-25 11:32:37.006090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:21.159 [2024-07-25 11:32:37.006105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.159 [2024-07-25 11:32:37.009921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.159 [2024-07-25 11:32:37.009970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.159 spare 00:21:21.439 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:21.439 [2024-07-25 11:32:37.282417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:21.439 [2024-07-25 11:32:37.284908] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:21.439 [2024-07-25 11:32:37.285182] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:21.439 [2024-07-25 11:32:37.285202] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:21.439 [2024-07-25 11:32:37.285600] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:21.439 [2024-07-25 11:32:37.285854] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:21.439 [2024-07-25 11:32:37.285879] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:21.439 [2024-07-25 11:32:37.286085] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.439 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:21.439 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.440 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.006 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.006 "name": "raid_bdev1", 00:21:22.006 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:22.006 "strip_size_kb": 0, 00:21:22.006 "state": "online", 00:21:22.006 "raid_level": "raid1", 00:21:22.006 "superblock": true, 00:21:22.006 "num_base_bdevs": 2, 00:21:22.006 "num_base_bdevs_discovered": 2, 00:21:22.006 "num_base_bdevs_operational": 2, 00:21:22.006 "base_bdevs_list": [ 00:21:22.006 { 00:21:22.006 "name": "BaseBdev1", 00:21:22.007 "uuid": "f92860e8-6550-5860-8cde-cb0f69803d8a", 00:21:22.007 "is_configured": true, 00:21:22.007 "data_offset": 2048, 00:21:22.007 "data_size": 63488 00:21:22.007 }, 00:21:22.007 { 00:21:22.007 "name": "BaseBdev2", 00:21:22.007 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:22.007 "is_configured": true, 00:21:22.007 "data_offset": 2048, 00:21:22.007 "data_size": 63488 00:21:22.007 } 00:21:22.007 ] 00:21:22.007 }' 00:21:22.007 11:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.007 11:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.573 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:22.573 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:21:22.831 [2024-07-25 11:32:38.483031] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.831 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:21:22.831 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.831 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:21:23.090 11:32:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.090 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:23.090 [2024-07-25 11:32:38.954945] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:23.090 /dev/nbd0 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.348 1+0 records in 00:21:23.348 1+0 records out 00:21:23.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414103 s, 9.9 MB/s 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:23.348 11:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:21:23.348 11:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:29.965 63488+0 records in 00:21:29.965 63488+0 records out 00:21:29.965 32505856 bytes (33 MB, 31 MiB) copied, 6.06788 s, 5.4 MB/s 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.965 [2024-07-25 11:32:45.409258] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:29.965 [2024-07-25 11:32:45.657951] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.965 11:32:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.965 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.223 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.223 "name": "raid_bdev1", 00:21:30.223 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:30.223 "strip_size_kb": 0, 00:21:30.223 "state": "online", 00:21:30.223 "raid_level": "raid1", 00:21:30.223 "superblock": true, 00:21:30.223 "num_base_bdevs": 2, 00:21:30.223 "num_base_bdevs_discovered": 1, 00:21:30.223 "num_base_bdevs_operational": 1, 00:21:30.223 "base_bdevs_list": [ 00:21:30.223 { 00:21:30.223 "name": null, 00:21:30.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.223 "is_configured": false, 00:21:30.223 "data_offset": 2048, 00:21:30.223 "data_size": 63488 00:21:30.223 }, 00:21:30.223 { 00:21:30.223 "name": "BaseBdev2", 00:21:30.223 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:30.223 "is_configured": true, 00:21:30.223 "data_offset": 2048, 00:21:30.223 "data_size": 63488 00:21:30.223 } 00:21:30.223 ] 00:21:30.223 }' 00:21:30.223 11:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.223 11:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.794 11:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:31.051 [2024-07-25 11:32:46.910292] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:31.051 [2024-07-25 11:32:46.925503] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:21:31.051 [2024-07-25 11:32:46.927943] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.309 11:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.244 11:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.503 "name": "raid_bdev1", 00:21:32.503 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:32.503 "strip_size_kb": 0, 00:21:32.503 "state": "online", 00:21:32.503 "raid_level": "raid1", 00:21:32.503 "superblock": true, 00:21:32.503 
"num_base_bdevs": 2, 00:21:32.503 "num_base_bdevs_discovered": 2, 00:21:32.503 "num_base_bdevs_operational": 2, 00:21:32.503 "process": { 00:21:32.503 "type": "rebuild", 00:21:32.503 "target": "spare", 00:21:32.503 "progress": { 00:21:32.503 "blocks": 24576, 00:21:32.503 "percent": 38 00:21:32.503 } 00:21:32.503 }, 00:21:32.503 "base_bdevs_list": [ 00:21:32.503 { 00:21:32.503 "name": "spare", 00:21:32.503 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:32.503 "is_configured": true, 00:21:32.503 "data_offset": 2048, 00:21:32.503 "data_size": 63488 00:21:32.503 }, 00:21:32.503 { 00:21:32.503 "name": "BaseBdev2", 00:21:32.503 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:32.503 "is_configured": true, 00:21:32.503 "data_offset": 2048, 00:21:32.503 "data_size": 63488 00:21:32.503 } 00:21:32.503 ] 00:21:32.503 }' 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.503 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:32.761 [2024-07-25 11:32:48.569781] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.761 [2024-07-25 11:32:48.640305] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:32.761 [2024-07-25 11:32:48.640377] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.761 [2024-07-25 11:32:48.640405] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.761 [2024-07-25 11:32:48.640418] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.019 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.277 11:32:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.277 "name": "raid_bdev1", 00:21:33.277 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:33.277 "strip_size_kb": 0, 00:21:33.277 "state": "online", 00:21:33.277 "raid_level": "raid1", 00:21:33.277 "superblock": true, 00:21:33.277 "num_base_bdevs": 2, 00:21:33.277 "num_base_bdevs_discovered": 1, 00:21:33.277 "num_base_bdevs_operational": 1, 00:21:33.277 "base_bdevs_list": [ 00:21:33.277 { 00:21:33.277 "name": null, 00:21:33.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.277 "is_configured": false, 00:21:33.277 "data_offset": 2048, 00:21:33.277 "data_size": 63488 00:21:33.277 }, 00:21:33.277 { 00:21:33.277 "name": "BaseBdev2", 00:21:33.277 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:33.277 "is_configured": true, 00:21:33.277 "data_offset": 2048, 00:21:33.277 "data_size": 63488 00:21:33.277 } 00:21:33.277 ] 00:21:33.277 }' 00:21:33.277 11:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.277 11:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.857 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.115 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:34.115 "name": "raid_bdev1", 00:21:34.115 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:34.115 "strip_size_kb": 0, 00:21:34.115 "state": "online", 00:21:34.115 "raid_level": "raid1", 00:21:34.115 "superblock": true, 00:21:34.115 "num_base_bdevs": 2, 00:21:34.115 "num_base_bdevs_discovered": 1, 00:21:34.115 "num_base_bdevs_operational": 1, 00:21:34.115 "base_bdevs_list": [ 00:21:34.116 { 00:21:34.116 "name": null, 00:21:34.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.116 "is_configured": false, 00:21:34.116 "data_offset": 2048, 00:21:34.116 "data_size": 63488 00:21:34.116 }, 00:21:34.116 { 00:21:34.116 "name": "BaseBdev2", 00:21:34.116 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:34.116 "is_configured": true, 00:21:34.116 "data_offset": 2048, 00:21:34.116 "data_size": 63488 00:21:34.116 } 00:21:34.116 ] 00:21:34.116 }' 00:21:34.116 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:34.116 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:34.116 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:34.116 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:34.116 11:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:34.380 [2024-07-25 11:32:50.194152] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:34.380 [2024-07-25 11:32:50.208419] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:21:34.380 [2024-07-25 11:32:50.210749] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:34.380 11:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:35.756 "name": "raid_bdev1", 00:21:35.756 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:35.756 "strip_size_kb": 0, 00:21:35.756 "state": "online", 00:21:35.756 "raid_level": "raid1", 00:21:35.756 "superblock": true, 00:21:35.756 "num_base_bdevs": 2, 00:21:35.756 "num_base_bdevs_discovered": 2, 00:21:35.756 "num_base_bdevs_operational": 2, 00:21:35.756 "process": { 00:21:35.756 "type": "rebuild", 00:21:35.756 "target": "spare", 00:21:35.756 "progress": { 00:21:35.756 "blocks": 24576, 00:21:35.756 "percent": 38 00:21:35.756 } 00:21:35.756 }, 00:21:35.756 "base_bdevs_list": [ 00:21:35.756 { 00:21:35.756 "name": "spare", 00:21:35.756 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:35.756 "is_configured": true, 00:21:35.756 "data_offset": 2048, 00:21:35.756 "data_size": 63488 00:21:35.756 }, 00:21:35.756 { 00:21:35.756 "name": "BaseBdev2", 00:21:35.756 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:35.756 "is_configured": true, 00:21:35.756 "data_offset": 2048, 00:21:35.756 "data_size": 63488 00:21:35.756 } 00:21:35.756 ] 00:21:35.756 }' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:21:35.756 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # 
'[' raid1 = raid1 ']' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=935 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.756 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.014 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.014 "name": "raid_bdev1", 00:21:36.014 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:36.014 "strip_size_kb": 0, 00:21:36.014 "state": "online", 00:21:36.014 "raid_level": "raid1", 00:21:36.014 "superblock": true, 00:21:36.014 "num_base_bdevs": 2, 00:21:36.014 "num_base_bdevs_discovered": 2, 00:21:36.014 "num_base_bdevs_operational": 2, 00:21:36.014 "process": { 00:21:36.014 "type": "rebuild", 00:21:36.014 "target": "spare", 00:21:36.014 "progress": { 00:21:36.014 "blocks": 32768, 00:21:36.014 "percent": 51 00:21:36.014 } 00:21:36.014 }, 00:21:36.014 "base_bdevs_list": [ 00:21:36.014 { 00:21:36.014 "name": "spare", 00:21:36.014 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:36.014 "is_configured": true, 00:21:36.014 "data_offset": 2048, 00:21:36.014 "data_size": 63488 00:21:36.014 }, 00:21:36.014 { 00:21:36.014 "name": "BaseBdev2", 00:21:36.014 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:36.014 "is_configured": true, 00:21:36.014 "data_offset": 2048, 00:21:36.014 "data_size": 63488 00:21:36.014 } 00:21:36.014 ] 00:21:36.014 }' 00:21:36.015 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:36.272 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.272 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:36.272 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.272 11:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.206 11:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.464 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:37.465 "name": "raid_bdev1", 00:21:37.465 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:37.465 "strip_size_kb": 0, 00:21:37.465 "state": "online", 00:21:37.465 "raid_level": "raid1", 00:21:37.465 "superblock": true, 00:21:37.465 "num_base_bdevs": 2, 00:21:37.465 "num_base_bdevs_discovered": 2, 00:21:37.465 "num_base_bdevs_operational": 2, 00:21:37.465 "process": { 00:21:37.465 "type": "rebuild", 00:21:37.465 "target": "spare", 00:21:37.465 "progress": { 00:21:37.465 "blocks": 59392, 00:21:37.465 "percent": 93 00:21:37.465 } 00:21:37.465 }, 00:21:37.465 "base_bdevs_list": [ 00:21:37.465 { 00:21:37.465 "name": "spare", 00:21:37.465 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:37.465 "is_configured": true, 00:21:37.465 "data_offset": 2048, 00:21:37.465 "data_size": 63488 00:21:37.465 }, 00:21:37.465 { 00:21:37.465 "name": "BaseBdev2", 00:21:37.465 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:37.465 "is_configured": true, 00:21:37.465 "data_offset": 2048, 00:21:37.465 "data_size": 63488 00:21:37.465 } 00:21:37.465 ] 00:21:37.465 }' 00:21:37.465 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:37.465 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.465 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:37.465 [2024-07-25 11:32:53.333208] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:37.465 [2024-07-25 11:32:53.333303] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:37.465 [2024-07-25 11:32:53.333452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.465 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.465 11:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:38.839 
"name": "raid_bdev1", 00:21:38.839 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:38.839 "strip_size_kb": 0, 00:21:38.839 "state": "online", 00:21:38.839 "raid_level": "raid1", 00:21:38.839 "superblock": true, 00:21:38.839 "num_base_bdevs": 2, 00:21:38.839 "num_base_bdevs_discovered": 2, 00:21:38.839 "num_base_bdevs_operational": 2, 00:21:38.839 "base_bdevs_list": [ 00:21:38.839 { 00:21:38.839 "name": "spare", 00:21:38.839 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:38.839 "is_configured": true, 00:21:38.839 "data_offset": 2048, 00:21:38.839 "data_size": 63488 00:21:38.839 }, 00:21:38.839 { 00:21:38.839 "name": "BaseBdev2", 00:21:38.839 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:38.839 "is_configured": true, 00:21:38.839 "data_offset": 2048, 00:21:38.839 "data_size": 63488 00:21:38.839 } 00:21:38.839 ] 00:21:38.839 }' 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.839 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.098 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.098 "name": "raid_bdev1", 00:21:39.098 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:39.098 "strip_size_kb": 0, 00:21:39.098 "state": "online", 00:21:39.098 "raid_level": "raid1", 00:21:39.098 "superblock": true, 00:21:39.098 "num_base_bdevs": 2, 00:21:39.098 "num_base_bdevs_discovered": 2, 00:21:39.098 "num_base_bdevs_operational": 2, 00:21:39.098 "base_bdevs_list": [ 00:21:39.098 { 00:21:39.098 "name": "spare", 00:21:39.098 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:39.098 "is_configured": true, 00:21:39.098 "data_offset": 2048, 00:21:39.098 "data_size": 63488 00:21:39.098 }, 00:21:39.098 { 00:21:39.098 "name": "BaseBdev2", 00:21:39.098 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:39.098 "is_configured": true, 00:21:39.098 "data_offset": 2048, 00:21:39.098 "data_size": 63488 00:21:39.098 } 00:21:39.098 ] 00:21:39.098 }' 00:21:39.098 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:39.355 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:39.355 11:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.355 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.356 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.356 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.356 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.356 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.614 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.614 "name": "raid_bdev1", 00:21:39.614 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:39.614 "strip_size_kb": 0, 00:21:39.614 "state": "online", 00:21:39.614 "raid_level": "raid1", 00:21:39.614 "superblock": true, 00:21:39.614 "num_base_bdevs": 2, 00:21:39.614 "num_base_bdevs_discovered": 2, 00:21:39.614 "num_base_bdevs_operational": 2, 00:21:39.614 "base_bdevs_list": [ 00:21:39.614 { 00:21:39.614 "name": "spare", 00:21:39.614 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:39.614 "is_configured": true, 00:21:39.614 "data_offset": 2048, 00:21:39.614 "data_size": 63488 00:21:39.614 }, 00:21:39.614 { 00:21:39.614 "name": "BaseBdev2", 00:21:39.614 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:39.614 "is_configured": true, 00:21:39.614 "data_offset": 2048, 00:21:39.614 "data_size": 63488 00:21:39.614 } 00:21:39.614 ] 00:21:39.614 }' 00:21:39.614 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.614 11:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.220 11:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:40.478 [2024-07-25 11:32:56.235546] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.478 [2024-07-25 11:32:56.235592] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.478 [2024-07-25 11:32:56.236028] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.478 [2024-07-25 11:32:56.236134] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.478 [2024-07-25 11:32:56.236157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:40.478 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:21:40.478 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.736 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:40.995 /dev/nbd0 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.254 1+0 records in 00:21:41.254 1+0 records out 00:21:41.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591825 s, 6.9 MB/s 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.254 11:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:41.512 /dev/nbd1 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.512 1+0 records in 00:21:41.512 1+0 records out 00:21:41.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461723 s, 8.9 MB/s 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.512 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:41.770 11:32:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.770 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.028 11:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:21:42.286 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:42.544 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:42.802 [2024-07-25 11:32:58.522746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.802 [2024-07-25 11:32:58.522838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.802 [2024-07-25 11:32:58.522874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:42.802 [2024-07-25 11:32:58.522894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.802 [2024-07-25 11:32:58.525755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.802 [2024-07-25 11:32:58.525810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.802 [2024-07-25 11:32:58.525934] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:42.802 [2024-07-25 11:32:58.526010] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.802 [2024-07-25 11:32:58.526194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.802 spare 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.802 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.802 [2024-07-25 11:32:58.626336] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:42.802 [2024-07-25 11:32:58.626397] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:42.802 [2024-07-25 11:32:58.626855] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:21:42.802 [2024-07-25 11:32:58.627121] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:42.802 [2024-07-25 11:32:58.627153] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:42.802 [2024-07-25 11:32:58.627379] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.060 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.060 "name": "raid_bdev1", 00:21:43.060 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:43.060 "strip_size_kb": 0, 00:21:43.060 "state": "online", 00:21:43.060 "raid_level": "raid1", 00:21:43.060 "superblock": true, 00:21:43.060 "num_base_bdevs": 2, 00:21:43.060 "num_base_bdevs_discovered": 2, 00:21:43.060 "num_base_bdevs_operational": 2, 00:21:43.060 "base_bdevs_list": [ 00:21:43.060 { 00:21:43.060 "name": "spare", 00:21:43.060 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:43.060 "is_configured": true, 00:21:43.060 "data_offset": 2048, 00:21:43.060 "data_size": 63488 00:21:43.060 }, 00:21:43.060 { 00:21:43.060 "name": "BaseBdev2", 00:21:43.060 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:43.060 "is_configured": true, 00:21:43.060 "data_offset": 2048, 00:21:43.060 "data_size": 63488 00:21:43.060 } 00:21:43.060 ] 00:21:43.060 }' 00:21:43.060 11:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.060 11:32:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.995 "name": "raid_bdev1", 00:21:43.995 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:43.995 "strip_size_kb": 0, 00:21:43.995 "state": "online", 00:21:43.995 "raid_level": "raid1", 00:21:43.995 "superblock": true, 00:21:43.995 "num_base_bdevs": 2, 00:21:43.995 "num_base_bdevs_discovered": 2, 00:21:43.995 "num_base_bdevs_operational": 2, 00:21:43.995 "base_bdevs_list": [ 00:21:43.995 { 00:21:43.995 "name": "spare", 00:21:43.995 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:43.995 "is_configured": true, 00:21:43.995 "data_offset": 2048, 00:21:43.995 "data_size": 63488 00:21:43.995 }, 00:21:43.995 { 00:21:43.995 "name": "BaseBdev2", 00:21:43.995 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:43.995 "is_configured": true, 00:21:43.995 "data_offset": 2048, 00:21:43.995 "data_size": 63488 00:21:43.995 } 00:21:43.995 ] 00:21:43.995 }' 00:21:43.995 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:44.253 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:44.253 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:44.253 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:44.253 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.253 11:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:44.512 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.512 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:44.770 [2024-07-25 11:33:00.495973] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.770 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.029 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.029 "name": "raid_bdev1", 00:21:45.029 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:45.029 "strip_size_kb": 0, 00:21:45.029 "state": "online", 00:21:45.029 "raid_level": "raid1", 00:21:45.029 "superblock": true, 00:21:45.029 "num_base_bdevs": 2, 00:21:45.029 "num_base_bdevs_discovered": 1, 00:21:45.029 "num_base_bdevs_operational": 1, 00:21:45.029 "base_bdevs_list": [ 00:21:45.029 { 00:21:45.029 "name": null, 00:21:45.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.029 "is_configured": false, 00:21:45.029 "data_offset": 2048, 00:21:45.029 "data_size": 63488 00:21:45.029 }, 00:21:45.029 { 00:21:45.029 "name": "BaseBdev2", 00:21:45.029 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:45.029 "is_configured": true, 00:21:45.029 "data_offset": 2048, 00:21:45.029 "data_size": 63488 00:21:45.029 } 00:21:45.029 ] 00:21:45.029 }' 00:21:45.029 11:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.029 11:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.963 11:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:45.964 [2024-07-25 11:33:01.704357] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.964 [2024-07-25 11:33:01.704678] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:45.964 [2024-07-25 11:33:01.704702] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:45.964 [2024-07-25 11:33:01.704756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.964 [2024-07-25 11:33:01.719582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:21:45.964 [2024-07-25 11:33:01.722047] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:45.964 11:33:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.897 11:33:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.154 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:47.154 "name": "raid_bdev1", 00:21:47.154 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:47.154 "strip_size_kb": 0, 00:21:47.154 "state": "online", 00:21:47.154 "raid_level": "raid1", 00:21:47.155 "superblock": true, 00:21:47.155 "num_base_bdevs": 2, 00:21:47.155 "num_base_bdevs_discovered": 2, 00:21:47.155 "num_base_bdevs_operational": 2, 00:21:47.155 "process": { 00:21:47.155 "type": "rebuild", 00:21:47.155 "target": "spare", 00:21:47.155 "progress": { 00:21:47.155 "blocks": 24576, 00:21:47.155 "percent": 38 00:21:47.155 } 00:21:47.155 }, 00:21:47.155 "base_bdevs_list": [ 00:21:47.155 { 00:21:47.155 "name": "spare", 00:21:47.155 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:47.155 "is_configured": true, 00:21:47.155 "data_offset": 2048, 00:21:47.155 "data_size": 63488 00:21:47.155 }, 00:21:47.155 { 00:21:47.155 "name": "BaseBdev2", 00:21:47.155 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:47.155 "is_configured": true, 00:21:47.155 "data_offset": 2048, 00:21:47.155 "data_size": 63488 00:21:47.155 } 00:21:47.155 ] 00:21:47.155 }' 00:21:47.155 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:47.413 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.413 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:47.413 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.413 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:47.671 [2024-07-25 11:33:03.359578] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.671 [2024-07-25 11:33:03.434885] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:47.671 [2024-07-25 11:33:03.434990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.671 
[2024-07-25 11:33:03.435020] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:47.671 [2024-07-25 11:33:03.435033] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.671 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.928 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.928 "name": "raid_bdev1", 00:21:47.928 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:47.928 "strip_size_kb": 0, 00:21:47.928 "state": "online", 00:21:47.928 "raid_level": "raid1", 00:21:47.928 "superblock": true, 00:21:47.928 "num_base_bdevs": 2, 00:21:47.928 "num_base_bdevs_discovered": 1, 00:21:47.928 "num_base_bdevs_operational": 1, 00:21:47.928 "base_bdevs_list": [ 00:21:47.928 { 00:21:47.928 "name": null, 00:21:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.928 "is_configured": false, 00:21:47.928 "data_offset": 2048, 00:21:47.928 "data_size": 63488 00:21:47.928 }, 00:21:47.928 { 00:21:47.928 "name": "BaseBdev2", 00:21:47.928 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:47.928 "is_configured": true, 00:21:47.928 "data_offset": 2048, 00:21:47.928 "data_size": 63488 00:21:47.928 } 00:21:47.928 ] 00:21:47.928 }' 00:21:47.928 11:33:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.928 11:33:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.864 11:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:48.864 [2024-07-25 11:33:04.689270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.864 [2024-07-25 11:33:04.689365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.864 [2024-07-25 11:33:04.689409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:48.864 [2024-07-25 11:33:04.689425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.864 [2024-07-25 11:33:04.690091] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.864 [2024-07-25 11:33:04.690125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.864 [2024-07-25 11:33:04.690249] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:48.864 [2024-07-25 11:33:04.690270] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:48.864 [2024-07-25 11:33:04.690288] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:48.864 [2024-07-25 11:33:04.690316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.864 [2024-07-25 11:33:04.704854] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:21:48.864 spare 00:21:48.864 [2024-07-25 11:33:04.707309] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.864 11:33:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.274 11:33:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.274 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.274 "name": "raid_bdev1", 00:21:50.274 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:50.274 "strip_size_kb": 0, 00:21:50.274 "state": "online", 00:21:50.274 "raid_level": "raid1", 00:21:50.274 "superblock": true, 00:21:50.274 "num_base_bdevs": 2, 00:21:50.274 "num_base_bdevs_discovered": 2, 00:21:50.274 "num_base_bdevs_operational": 2, 00:21:50.274 "process": { 00:21:50.274 "type": "rebuild", 00:21:50.274 "target": "spare", 00:21:50.274 "progress": { 00:21:50.274 "blocks": 26624, 00:21:50.274 "percent": 41 00:21:50.274 } 00:21:50.274 }, 00:21:50.274 "base_bdevs_list": [ 00:21:50.274 { 00:21:50.274 "name": "spare", 00:21:50.274 "uuid": "b6a18190-7dcd-59ef-9d32-1559f75fc9e4", 00:21:50.274 "is_configured": true, 00:21:50.274 "data_offset": 2048, 00:21:50.274 "data_size": 63488 00:21:50.274 }, 00:21:50.274 { 00:21:50.274 "name": "BaseBdev2", 00:21:50.274 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:50.274 "is_configured": true, 00:21:50.274 "data_offset": 2048, 00:21:50.274 "data_size": 63488 00:21:50.274 } 00:21:50.274 ] 00:21:50.274 }' 00:21:50.274 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:50.274 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.274 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:50.530 
11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.530 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:50.787 [2024-07-25 11:33:06.434921] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.787 [2024-07-25 11:33:06.520940] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:50.787 [2024-07-25 11:33:06.521061] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.787 [2024-07-25 11:33:06.521087] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.787 [2024-07-25 11:33:06.521104] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.787 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.045 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.045 "name": "raid_bdev1", 00:21:51.045 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:51.045 "strip_size_kb": 0, 00:21:51.045 "state": "online", 00:21:51.045 "raid_level": "raid1", 00:21:51.045 "superblock": true, 00:21:51.045 "num_base_bdevs": 2, 00:21:51.045 "num_base_bdevs_discovered": 1, 00:21:51.045 "num_base_bdevs_operational": 1, 00:21:51.045 "base_bdevs_list": [ 00:21:51.045 { 00:21:51.045 "name": null, 00:21:51.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.045 "is_configured": false, 00:21:51.045 "data_offset": 2048, 00:21:51.045 "data_size": 63488 00:21:51.045 }, 00:21:51.045 { 00:21:51.045 "name": "BaseBdev2", 00:21:51.045 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:51.045 "is_configured": true, 00:21:51.045 "data_offset": 2048, 00:21:51.045 "data_size": 63488 00:21:51.045 } 00:21:51.045 ] 00:21:51.045 }' 00:21:51.045 11:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.045 11:33:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:51.976 "name": "raid_bdev1", 00:21:51.976 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:51.976 "strip_size_kb": 0, 00:21:51.976 "state": "online", 00:21:51.976 "raid_level": "raid1", 00:21:51.976 "superblock": true, 00:21:51.976 "num_base_bdevs": 2, 00:21:51.976 "num_base_bdevs_discovered": 1, 00:21:51.976 "num_base_bdevs_operational": 1, 00:21:51.976 "base_bdevs_list": [ 00:21:51.976 { 00:21:51.976 "name": null, 00:21:51.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.976 "is_configured": false, 00:21:51.976 "data_offset": 2048, 00:21:51.976 "data_size": 63488 00:21:51.976 }, 00:21:51.976 { 00:21:51.976 "name": "BaseBdev2", 00:21:51.976 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:51.976 "is_configured": true, 00:21:51.976 "data_offset": 2048, 00:21:51.976 "data_size": 63488 00:21:51.976 } 00:21:51.976 ] 00:21:51.976 }' 00:21:51.976 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:52.233 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:52.233 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:52.233 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:52.233 11:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:52.502 11:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:52.770 [2024-07-25 11:33:08.439170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:52.770 [2024-07-25 11:33:08.439277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.770 [2024-07-25 11:33:08.439312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:52.770 [2024-07-25 11:33:08.439332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.770 [2024-07-25 11:33:08.439924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.770 [2024-07-25 11:33:08.439967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:52.770 [2024-07-25 11:33:08.440077] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:52.770 [2024-07-25 11:33:08.440105] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:52.770 [2024-07-25 11:33:08.440118] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:52.770 BaseBdev1 00:21:52.770 11:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.705 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.964 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.964 "name": "raid_bdev1", 00:21:53.964 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:53.964 "strip_size_kb": 0, 00:21:53.964 "state": "online", 00:21:53.965 "raid_level": "raid1", 00:21:53.965 "superblock": true, 00:21:53.965 "num_base_bdevs": 2, 00:21:53.965 "num_base_bdevs_discovered": 1, 00:21:53.965 "num_base_bdevs_operational": 1, 00:21:53.965 "base_bdevs_list": [ 00:21:53.965 { 00:21:53.965 "name": null, 00:21:53.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.965 "is_configured": false, 00:21:53.965 "data_offset": 2048, 00:21:53.965 "data_size": 63488 00:21:53.965 }, 00:21:53.965 { 00:21:53.965 "name": "BaseBdev2", 00:21:53.965 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:53.965 "is_configured": true, 00:21:53.965 "data_offset": 2048, 00:21:53.965 "data_size": 63488 00:21:53.965 } 00:21:53.965 ] 00:21:53.965 }' 00:21:53.965 11:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.965 11:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:54.530 11:33:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.530 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.096 "name": "raid_bdev1", 00:21:55.096 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:55.096 "strip_size_kb": 0, 00:21:55.096 "state": "online", 00:21:55.096 "raid_level": "raid1", 00:21:55.096 "superblock": true, 00:21:55.096 "num_base_bdevs": 2, 00:21:55.096 "num_base_bdevs_discovered": 1, 00:21:55.096 "num_base_bdevs_operational": 1, 00:21:55.096 "base_bdevs_list": [ 00:21:55.096 { 00:21:55.096 "name": null, 00:21:55.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.096 "is_configured": false, 00:21:55.096 "data_offset": 2048, 00:21:55.096 "data_size": 63488 00:21:55.096 }, 00:21:55.096 { 00:21:55.096 "name": "BaseBdev2", 00:21:55.096 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:55.096 "is_configured": true, 00:21:55.096 "data_offset": 2048, 00:21:55.096 "data_size": 63488 00:21:55.096 } 00:21:55.096 ] 00:21:55.096 }' 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:55.096 11:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:55.355 [2024-07-25 11:33:10.987854] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:55.355 [2024-07-25 11:33:10.988075] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:55.355 [2024-07-25 11:33:10.988105] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:55.355 request: 00:21:55.355 { 00:21:55.355 "base_bdev": "BaseBdev1", 00:21:55.355 "raid_bdev": "raid_bdev1", 00:21:55.355 "method": "bdev_raid_add_base_bdev", 00:21:55.355 "req_id": 1 00:21:55.355 } 00:21:55.355 Got JSON-RPC error response 00:21:55.355 response: 00:21:55.355 { 00:21:55.355 "code": -22, 00:21:55.355 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:55.355 } 00:21:55.355 11:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:21:55.355 11:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:55.355 11:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:55.355 11:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:55.355 11:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.288 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.546 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.546 "name": "raid_bdev1", 00:21:56.546 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:56.546 "strip_size_kb": 0, 00:21:56.546 "state": "online", 00:21:56.546 "raid_level": "raid1", 00:21:56.546 "superblock": true, 00:21:56.546 "num_base_bdevs": 2, 00:21:56.546 "num_base_bdevs_discovered": 1, 00:21:56.546 "num_base_bdevs_operational": 1, 00:21:56.546 "base_bdevs_list": [ 00:21:56.546 { 00:21:56.546 "name": null, 00:21:56.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.546 "is_configured": false, 00:21:56.546 "data_offset": 2048, 00:21:56.546 "data_size": 63488 00:21:56.546 }, 00:21:56.546 { 00:21:56.546 "name": "BaseBdev2", 00:21:56.546 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 
00:21:56.546 "is_configured": true, 00:21:56.546 "data_offset": 2048, 00:21:56.546 "data_size": 63488 00:21:56.546 } 00:21:56.546 ] 00:21:56.546 }' 00:21:56.546 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.547 11:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.112 11:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.370 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.371 "name": "raid_bdev1", 00:21:57.371 "uuid": "80eece3d-d848-4337-a3d7-9cbe396209f2", 00:21:57.371 "strip_size_kb": 0, 00:21:57.371 "state": "online", 00:21:57.371 "raid_level": "raid1", 00:21:57.371 "superblock": true, 00:21:57.371 "num_base_bdevs": 2, 00:21:57.371 "num_base_bdevs_discovered": 1, 00:21:57.371 "num_base_bdevs_operational": 1, 00:21:57.371 "base_bdevs_list": [ 00:21:57.371 { 00:21:57.371 "name": null, 00:21:57.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.371 "is_configured": false, 00:21:57.371 "data_offset": 2048, 00:21:57.371 "data_size": 63488 00:21:57.371 }, 00:21:57.371 { 00:21:57.371 "name": "BaseBdev2", 00:21:57.371 "uuid": "10560220-21cb-57e1-bb22-0dc022857baa", 00:21:57.371 "is_configured": true, 00:21:57.371 "data_offset": 2048, 00:21:57.371 "data_size": 63488 00:21:57.371 } 00:21:57.371 ] 00:21:57.371 }' 00:21:57.371 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 86281 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86281 ']' 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86281 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86281 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.628 killing process with pid 86281 00:21:57.628 
11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86281' 00:21:57.628 Received shutdown signal, test time was about 60.000000 seconds 00:21:57.628 00:21:57.628 Latency(us) 00:21:57.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.628 =================================================================================================================== 00:21:57.628 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86281 00:21:57.628 [2024-07-25 11:33:13.333015] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.628 11:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86281 00:21:57.628 [2024-07-25 11:33:13.333175] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.628 [2024-07-25 11:33:13.333248] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.628 [2024-07-25 11:33:13.333268] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:57.885 [2024-07-25 11:33:13.601094] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:21:59.256 00:21:59.256 real 0m40.468s 00:21:59.256 user 0m59.940s 00:21:59.256 sys 0m5.723s 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.256 ************************************ 00:21:59.256 END TEST raid_rebuild_test_sb 00:21:59.256 ************************************ 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:59.256 11:33:14 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:21:59.256 11:33:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:59.256 11:33:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.256 11:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.256 ************************************ 00:21:59.256 START TEST raid_rebuild_test_io 00:21:59.256 ************************************ 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:59.256 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # 
(( i <= num_base_bdevs )) 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=87192 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 87192 /var/tmp/spdk-raid.sock 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87192 ']' 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.257 11:33:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:59.257 [2024-07-25 11:33:14.915577] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:21:59.257 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:59.257 Zero copy mechanism will not be used. 
00:21:59.257 [2024-07-25 11:33:14.915778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87192 ] 00:21:59.257 [2024-07-25 11:33:15.089539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.515 [2024-07-25 11:33:15.328549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.772 [2024-07-25 11:33:15.531646] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.772 [2024-07-25 11:33:15.531740] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.030 11:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.030 11:33:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:22:00.030 11:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:00.030 11:33:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.288 BaseBdev1_malloc 00:22:00.288 11:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:00.594 [2024-07-25 11:33:16.331767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:00.594 [2024-07-25 11:33:16.331865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.594 [2024-07-25 11:33:16.331907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:00.594 [2024-07-25 11:33:16.331923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.594 [2024-07-25 11:33:16.334779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.594 [2024-07-25 11:33:16.334829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:00.594 BaseBdev1 00:22:00.594 11:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:00.594 11:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:00.865 BaseBdev2_malloc 00:22:00.865 11:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:01.137 [2024-07-25 11:33:16.906945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:01.137 [2024-07-25 11:33:16.907050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.137 [2024-07-25 11:33:16.907091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:01.137 [2024-07-25 11:33:16.907108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.137 [2024-07-25 11:33:16.909975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.137 [2024-07-25 11:33:16.910028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:22:01.137 BaseBdev2 00:22:01.137 11:33:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:01.395 spare_malloc 00:22:01.395 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:01.652 spare_delay 00:22:01.652 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:01.910 [2024-07-25 11:33:17.682967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.910 [2024-07-25 11:33:17.683061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.910 [2024-07-25 11:33:17.683102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:01.910 [2024-07-25 11:33:17.683118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.910 [2024-07-25 11:33:17.685921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.910 [2024-07-25 11:33:17.685968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.910 spare 00:22:01.910 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:02.167 [2024-07-25 11:33:17.915082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.167 [2024-07-25 11:33:17.917475] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.167 [2024-07-25 11:33:17.917665] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:02.167 [2024-07-25 11:33:17.917688] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:02.167 [2024-07-25 11:33:17.918105] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:02.168 [2024-07-25 11:33:17.918332] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:02.168 [2024-07-25 11:33:17.918366] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:02.168 [2024-07-25 11:33:17.918584] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.168 11:33:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.425 11:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.425 "name": "raid_bdev1", 00:22:02.426 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:02.426 "strip_size_kb": 0, 00:22:02.426 "state": "online", 00:22:02.426 "raid_level": "raid1", 00:22:02.426 "superblock": false, 00:22:02.426 "num_base_bdevs": 2, 00:22:02.426 "num_base_bdevs_discovered": 2, 00:22:02.426 "num_base_bdevs_operational": 2, 00:22:02.426 "base_bdevs_list": [ 00:22:02.426 { 00:22:02.426 "name": "BaseBdev1", 00:22:02.426 "uuid": "e325ccdb-a1e7-5609-9448-1f6e9b11f8f5", 00:22:02.426 "is_configured": true, 00:22:02.426 "data_offset": 0, 00:22:02.426 "data_size": 65536 00:22:02.426 }, 00:22:02.426 { 00:22:02.426 "name": "BaseBdev2", 00:22:02.426 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:02.426 "is_configured": true, 00:22:02.426 "data_offset": 0, 00:22:02.426 "data_size": 65536 00:22:02.426 } 00:22:02.426 ] 00:22:02.426 }' 00:22:02.426 11:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.426 11:33:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:02.990 11:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:02.990 11:33:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:03.246 [2024-07-25 11:33:19.063697] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.246 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:22:03.246 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.246 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:03.503 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:22:03.504 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:22:03.504 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:03.504 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:03.798 [2024-07-25 11:33:19.447608] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:03.798 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:03.798 Zero copy mechanism will not be used. 00:22:03.798 Running I/O for 60 seconds... 
00:22:03.798 [2024-07-25 11:33:19.543169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:03.798 [2024-07-25 11:33:19.550678] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.798 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.056 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.056 "name": "raid_bdev1", 00:22:04.056 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:04.056 "strip_size_kb": 0, 00:22:04.056 "state": "online", 00:22:04.056 "raid_level": "raid1", 00:22:04.056 "superblock": false, 00:22:04.056 "num_base_bdevs": 2, 00:22:04.056 "num_base_bdevs_discovered": 1, 00:22:04.056 "num_base_bdevs_operational": 1, 00:22:04.056 "base_bdevs_list": [ 00:22:04.056 { 00:22:04.056 "name": null, 00:22:04.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.056 "is_configured": false, 00:22:04.056 "data_offset": 0, 00:22:04.056 "data_size": 65536 00:22:04.056 }, 00:22:04.056 { 00:22:04.056 "name": "BaseBdev2", 00:22:04.056 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:04.056 "is_configured": true, 00:22:04.056 "data_offset": 0, 00:22:04.056 "data_size": 65536 00:22:04.056 } 00:22:04.056 ] 00:22:04.056 }' 00:22:04.056 11:33:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.056 11:33:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:04.990 11:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.990 [2024-07-25 11:33:20.727470] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.990 11:33:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:04.990 [2024-07-25 11:33:20.822842] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:04.990 [2024-07-25 11:33:20.825455] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:05.248 [2024-07-25 11:33:20.935970] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.248 [2024-07-25 11:33:20.936738] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.506 [2024-07-25 11:33:21.165850] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:05.506 [2024-07-25 11:33:21.166293] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:05.764 [2024-07-25 11:33:21.500927] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:06.023 [2024-07-25 11:33:21.711977] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:06.023 [2024-07-25 11:33:21.712445] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:06.023 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.024 11:33:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.282 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:06.282 "name": "raid_bdev1", 00:22:06.282 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:06.282 "strip_size_kb": 0, 00:22:06.282 "state": "online", 00:22:06.282 "raid_level": "raid1", 00:22:06.282 "superblock": false, 00:22:06.282 "num_base_bdevs": 2, 00:22:06.282 "num_base_bdevs_discovered": 2, 00:22:06.282 "num_base_bdevs_operational": 2, 00:22:06.282 "process": { 00:22:06.282 "type": "rebuild", 00:22:06.282 "target": "spare", 00:22:06.282 "progress": { 00:22:06.282 "blocks": 14336, 00:22:06.282 "percent": 21 00:22:06.282 } 00:22:06.282 }, 00:22:06.282 "base_bdevs_list": [ 00:22:06.282 { 00:22:06.282 "name": "spare", 00:22:06.282 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:06.282 "is_configured": true, 00:22:06.282 "data_offset": 0, 00:22:06.282 "data_size": 65536 00:22:06.282 }, 00:22:06.282 { 00:22:06.282 "name": "BaseBdev2", 00:22:06.282 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:06.282 "is_configured": true, 00:22:06.282 "data_offset": 0, 00:22:06.282 "data_size": 65536 00:22:06.282 } 00:22:06.282 ] 00:22:06.282 }' 00:22:06.282 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:06.540 [2024-07-25 11:33:22.182948] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:06.540 [2024-07-25 11:33:22.183266] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:22:06.540 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.540 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:06.540 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.540 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:06.809 [2024-07-25 11:33:22.462898] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.809 [2024-07-25 11:33:22.532367] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:06.809 [2024-07-25 11:33:22.542249] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.809 [2024-07-25 11:33:22.542320] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.809 [2024-07-25 11:33:22.542340] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:06.809 [2024-07-25 11:33:22.574738] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.809 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.068 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.068 "name": "raid_bdev1", 00:22:07.068 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:07.068 "strip_size_kb": 0, 00:22:07.068 "state": "online", 00:22:07.068 "raid_level": "raid1", 00:22:07.068 "superblock": false, 00:22:07.068 "num_base_bdevs": 2, 00:22:07.068 "num_base_bdevs_discovered": 1, 00:22:07.068 "num_base_bdevs_operational": 1, 00:22:07.068 "base_bdevs_list": [ 00:22:07.068 { 00:22:07.068 "name": null, 00:22:07.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.068 "is_configured": false, 00:22:07.068 "data_offset": 0, 00:22:07.068 "data_size": 65536 00:22:07.068 }, 00:22:07.068 { 00:22:07.068 "name": "BaseBdev2", 00:22:07.068 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:07.068 "is_configured": 
true, 00:22:07.068 "data_offset": 0, 00:22:07.068 "data_size": 65536 00:22:07.068 } 00:22:07.068 ] 00:22:07.068 }' 00:22:07.068 11:33:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.068 11:33:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.001 "name": "raid_bdev1", 00:22:08.001 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:08.001 "strip_size_kb": 0, 00:22:08.001 "state": "online", 00:22:08.001 "raid_level": "raid1", 00:22:08.001 "superblock": false, 00:22:08.001 "num_base_bdevs": 2, 00:22:08.001 "num_base_bdevs_discovered": 1, 00:22:08.001 "num_base_bdevs_operational": 1, 00:22:08.001 "base_bdevs_list": [ 00:22:08.001 { 00:22:08.001 "name": null, 00:22:08.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.001 "is_configured": false, 00:22:08.001 "data_offset": 0, 00:22:08.001 "data_size": 65536 00:22:08.001 }, 00:22:08.001 { 00:22:08.001 "name": "BaseBdev2", 00:22:08.001 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:08.001 "is_configured": true, 00:22:08.001 "data_offset": 0, 00:22:08.001 "data_size": 65536 00:22:08.001 } 00:22:08.001 ] 00:22:08.001 }' 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.001 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:08.002 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.260 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:08.260 11:33:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:08.518 [2024-07-25 11:33:24.194257] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.518 [2024-07-25 11:33:24.247139] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:08.518 [2024-07-25 11:33:24.249601] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:08.518 11:33:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:22:08.518 [2024-07-25 11:33:24.368654] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:08.518 [2024-07-25 11:33:24.369344] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:22:08.777 [2024-07-25 11:33:24.597356] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:08.777 [2024-07-25 11:33:24.597791] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:09.342 [2024-07-25 11:33:24.975325] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:09.342 [2024-07-25 11:33:25.213586] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:09.342 [2024-07-25 11:33:25.214005] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.600 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.858 [2024-07-25 11:33:25.530157] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:09.858 [2024-07-25 11:33:25.530905] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.858 "name": "raid_bdev1", 00:22:09.858 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:09.858 "strip_size_kb": 0, 00:22:09.858 "state": "online", 00:22:09.858 "raid_level": "raid1", 00:22:09.858 "superblock": false, 00:22:09.858 "num_base_bdevs": 2, 00:22:09.858 "num_base_bdevs_discovered": 2, 00:22:09.858 "num_base_bdevs_operational": 2, 00:22:09.858 "process": { 00:22:09.858 "type": "rebuild", 00:22:09.858 "target": "spare", 00:22:09.858 "progress": { 00:22:09.858 "blocks": 14336, 00:22:09.858 "percent": 21 00:22:09.858 } 00:22:09.858 }, 00:22:09.858 "base_bdevs_list": [ 00:22:09.858 { 00:22:09.858 "name": "spare", 00:22:09.858 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:09.858 "is_configured": true, 00:22:09.858 "data_offset": 0, 00:22:09.858 "data_size": 65536 00:22:09.858 }, 00:22:09.858 { 00:22:09.858 "name": "BaseBdev2", 00:22:09.858 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:09.858 "is_configured": true, 00:22:09.858 "data_offset": 0, 00:22:09.858 "data_size": 65536 00:22:09.858 } 00:22:09.858 ] 00:22:09.858 }' 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:09.858 
11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=969 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.858 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.116 [2024-07-25 11:33:25.778888] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:10.116 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.116 "name": "raid_bdev1", 00:22:10.116 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:10.116 "strip_size_kb": 0, 00:22:10.116 "state": "online", 00:22:10.116 "raid_level": "raid1", 00:22:10.116 "superblock": false, 00:22:10.116 "num_base_bdevs": 2, 00:22:10.116 "num_base_bdevs_discovered": 2, 00:22:10.116 "num_base_bdevs_operational": 2, 00:22:10.116 "process": { 00:22:10.116 "type": "rebuild", 00:22:10.116 "target": "spare", 00:22:10.116 "progress": { 00:22:10.116 "blocks": 18432, 00:22:10.116 "percent": 28 00:22:10.116 } 00:22:10.116 }, 00:22:10.116 "base_bdevs_list": [ 00:22:10.116 { 00:22:10.116 "name": "spare", 00:22:10.116 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:10.116 "is_configured": true, 00:22:10.116 "data_offset": 0, 00:22:10.116 "data_size": 65536 00:22:10.116 }, 00:22:10.116 { 00:22:10.116 "name": "BaseBdev2", 00:22:10.116 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:10.116 "is_configured": true, 00:22:10.116 "data_offset": 0, 00:22:10.116 "data_size": 65536 00:22:10.116 } 00:22:10.116 ] 00:22:10.116 }' 00:22:10.116 11:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:10.373 11:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.373 11:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:10.373 [2024-07-25 11:33:26.039471] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:10.373 11:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
00:22:10.373 11:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:10.631 [2024-07-25 11:33:26.277102] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:10.631 [2024-07-25 11:33:26.277501] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:10.890 [2024-07-25 11:33:26.760820] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.455 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.455 [2024-07-25 11:33:27.102702] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:11.455 [2024-07-25 11:33:27.241273] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:11.713 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.713 "name": "raid_bdev1", 00:22:11.713 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:11.713 "strip_size_kb": 0, 00:22:11.713 "state": "online", 00:22:11.713 "raid_level": "raid1", 00:22:11.713 "superblock": false, 00:22:11.713 "num_base_bdevs": 2, 00:22:11.713 "num_base_bdevs_discovered": 2, 00:22:11.713 "num_base_bdevs_operational": 2, 00:22:11.713 "process": { 00:22:11.713 "type": "rebuild", 00:22:11.713 "target": "spare", 00:22:11.713 "progress": { 00:22:11.713 "blocks": 34816, 00:22:11.713 "percent": 53 00:22:11.713 } 00:22:11.713 }, 00:22:11.713 "base_bdevs_list": [ 00:22:11.713 { 00:22:11.713 "name": "spare", 00:22:11.713 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:11.713 "is_configured": true, 00:22:11.713 "data_offset": 0, 00:22:11.713 "data_size": 65536 00:22:11.713 }, 00:22:11.713 { 00:22:11.713 "name": "BaseBdev2", 00:22:11.713 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:11.713 "is_configured": true, 00:22:11.713 "data_offset": 0, 00:22:11.713 "data_size": 65536 00:22:11.713 } 00:22:11.713 ] 00:22:11.713 }' 00:22:11.713 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:11.713 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.713 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:11.713 11:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.713 11:33:27 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:11.972 [2024-07-25 11:33:27.613052] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:12.230 [2024-07-25 11:33:27.944309] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:12.488 [2024-07-25 11:33:28.164091] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:12.488 [2024-07-25 11:33:28.164478] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:12.779 [2024-07-25 11:33:28.380529] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.779 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.779 [2024-07-25 11:33:28.583714] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:12.779 [2024-07-25 11:33:28.584115] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:13.037 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.037 "name": "raid_bdev1", 00:22:13.037 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:13.037 "strip_size_kb": 0, 00:22:13.037 "state": "online", 00:22:13.037 "raid_level": "raid1", 00:22:13.037 "superblock": false, 00:22:13.037 "num_base_bdevs": 2, 00:22:13.037 "num_base_bdevs_discovered": 2, 00:22:13.037 "num_base_bdevs_operational": 2, 00:22:13.037 "process": { 00:22:13.037 "type": "rebuild", 00:22:13.037 "target": "spare", 00:22:13.037 "progress": { 00:22:13.037 "blocks": 53248, 00:22:13.037 "percent": 81 00:22:13.037 } 00:22:13.037 }, 00:22:13.037 "base_bdevs_list": [ 00:22:13.037 { 00:22:13.037 "name": "spare", 00:22:13.037 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:13.037 "is_configured": true, 00:22:13.037 "data_offset": 0, 00:22:13.037 "data_size": 65536 00:22:13.037 }, 00:22:13.037 { 00:22:13.037 "name": "BaseBdev2", 00:22:13.037 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:13.037 "is_configured": true, 00:22:13.037 "data_offset": 0, 00:22:13.037 "data_size": 65536 00:22:13.037 } 00:22:13.037 ] 00:22:13.037 }' 00:22:13.037 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.037 11:33:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.037 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.295 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.295 11:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:13.295 [2024-07-25 11:33:29.015204] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:13.554 [2024-07-25 11:33:29.355035] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:13.812 [2024-07-25 11:33:29.455035] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:13.812 [2024-07-25 11:33:29.457412] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.071 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:14.071 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.071 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.071 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:14.071 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:14.072 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.072 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.072 11:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.638 "name": "raid_bdev1", 00:22:14.638 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:14.638 "strip_size_kb": 0, 00:22:14.638 "state": "online", 00:22:14.638 "raid_level": "raid1", 00:22:14.638 "superblock": false, 00:22:14.638 "num_base_bdevs": 2, 00:22:14.638 "num_base_bdevs_discovered": 2, 00:22:14.638 "num_base_bdevs_operational": 2, 00:22:14.638 "base_bdevs_list": [ 00:22:14.638 { 00:22:14.638 "name": "spare", 00:22:14.638 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:14.638 "is_configured": true, 00:22:14.638 "data_offset": 0, 00:22:14.638 "data_size": 65536 00:22:14.638 }, 00:22:14.638 { 00:22:14.638 "name": "BaseBdev2", 00:22:14.638 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:14.638 "is_configured": true, 00:22:14.638 "data_offset": 0, 00:22:14.638 "data_size": 65536 00:22:14.638 } 00:22:14.638 ] 00:22:14.638 }' 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.638 
11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:14.638 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:14.639 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:14.639 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:14.639 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.639 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.896 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.896 "name": "raid_bdev1", 00:22:14.896 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:14.896 "strip_size_kb": 0, 00:22:14.896 "state": "online", 00:22:14.896 "raid_level": "raid1", 00:22:14.897 "superblock": false, 00:22:14.897 "num_base_bdevs": 2, 00:22:14.897 "num_base_bdevs_discovered": 2, 00:22:14.897 "num_base_bdevs_operational": 2, 00:22:14.897 "base_bdevs_list": [ 00:22:14.897 { 00:22:14.897 "name": "spare", 00:22:14.897 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:14.897 "is_configured": true, 00:22:14.897 "data_offset": 0, 00:22:14.897 "data_size": 65536 00:22:14.897 }, 00:22:14.897 { 00:22:14.897 "name": "BaseBdev2", 00:22:14.897 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:14.897 "is_configured": true, 00:22:14.897 "data_offset": 0, 00:22:14.897 "data_size": 65536 00:22:14.897 } 00:22:14.897 ] 00:22:14.897 }' 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.897 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.155 
11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.155 "name": "raid_bdev1", 00:22:15.155 "uuid": "890831a3-853c-485f-b2dd-bacfdd8175d8", 00:22:15.155 "strip_size_kb": 0, 00:22:15.155 "state": "online", 00:22:15.155 "raid_level": "raid1", 00:22:15.155 "superblock": false, 00:22:15.155 "num_base_bdevs": 2, 00:22:15.155 "num_base_bdevs_discovered": 2, 00:22:15.155 "num_base_bdevs_operational": 2, 00:22:15.155 "base_bdevs_list": [ 00:22:15.155 { 00:22:15.155 "name": "spare", 00:22:15.155 "uuid": "7edb8761-3fe7-5673-bcd1-aa5424c2df81", 00:22:15.155 "is_configured": true, 00:22:15.155 "data_offset": 0, 00:22:15.155 "data_size": 65536 00:22:15.155 }, 00:22:15.155 { 00:22:15.155 "name": "BaseBdev2", 00:22:15.155 "uuid": "9bdb6a70-6e38-55c6-bc4f-e72a2a57c7e1", 00:22:15.155 "is_configured": true, 00:22:15.155 "data_offset": 0, 00:22:15.155 "data_size": 65536 00:22:15.155 } 00:22:15.155 ] 00:22:15.155 }' 00:22:15.155 11:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.155 11:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:15.758 11:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:16.015 [2024-07-25 11:33:31.810493] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.015 [2024-07-25 11:33:31.810551] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.273 00:22:16.273 Latency(us) 00:22:16.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.273 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:16.273 raid_bdev1 : 12.45 91.79 275.38 0.00 0.00 14376.07 310.92 117726.49 00:22:16.273 =================================================================================================================== 00:22:16.273 Total : 91.79 275.38 0.00 0.00 14376.07 310.92 117726.49 00:22:16.273 0 00:22:16.273 [2024-07-25 11:33:31.919838] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.273 [2024-07-25 11:33:31.919895] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.273 [2024-07-25 11:33:31.919990] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.273 [2024-07-25 11:33:31.920010] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:16.273 11:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.273 11:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 
-- # bdev_list=('spare') 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.530 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:16.531 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.531 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.531 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:16.789 /dev/nbd0 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.789 1+0 records in 00:22:16.789 1+0 records out 00:22:16.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263111 s, 15.6 MB/s 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.789 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:17.048 /dev/nbd1 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.048 1+0 records in 00:22:17.048 1+0 records out 00:22:17.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493497 s, 8.3 MB/s 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.048 11:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.306 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.564 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 87192 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87192 ']' 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87192 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 87192 00:22:17.823 killing process with pid 87192 00:22:17.823 Received shutdown signal, test time was about 14.224632 seconds 00:22:17.823 00:22:17.823 Latency(us) 00:22:17.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.823 =================================================================================================================== 00:22:17.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87192' 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87192 00:22:17.823 [2024-07-25 11:33:33.674925] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.823 11:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87192 00:22:18.082 [2024-07-25 11:33:33.884329] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:22:19.458 00:22:19.458 real 0m20.352s 00:22:19.458 user 0m31.080s 00:22:19.458 sys 0m2.366s 00:22:19.458 ************************************ 00:22:19.458 END TEST raid_rebuild_test_io 00:22:19.458 ************************************ 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:22:19.458 11:33:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:22:19.458 11:33:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:19.458 11:33:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:19.458 11:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.458 ************************************ 00:22:19.458 START TEST raid_rebuild_test_sb_io 00:22:19.458 ************************************ 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:22:19.458 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=87655 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 87655 /var/tmp/spdk-raid.sock 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87655 ']' 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.459 11:33:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:19.459 [2024-07-25 11:33:35.329726] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:19.459 [2024-07-25 11:33:35.330142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.459 Zero copy mechanism will not be used. 
00:22:19.459 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87655 ] 00:22:19.716 [2024-07-25 11:33:35.519989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.281 [2024-07-25 11:33:35.884737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.281 [2024-07-25 11:33:36.102762] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.281 [2024-07-25 11:33:36.102814] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.539 11:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.539 11:33:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:22:20.539 11:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:20.539 11:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:20.797 BaseBdev1_malloc 00:22:20.797 11:33:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.363 [2024-07-25 11:33:36.996679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.363 [2024-07-25 11:33:36.996774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.363 [2024-07-25 11:33:36.996829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:21.363 [2024-07-25 11:33:36.996847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.363 [2024-07-25 11:33:36.999830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.363 [2024-07-25 11:33:36.999872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.363 BaseBdev1 00:22:21.363 11:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:21.363 11:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:21.622 BaseBdev2_malloc 00:22:21.622 11:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:21.879 [2024-07-25 11:33:37.621674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:21.879 [2024-07-25 11:33:37.621844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.879 [2024-07-25 11:33:37.621889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:21.880 [2024-07-25 11:33:37.621907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.880 [2024-07-25 11:33:37.624884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.880 [2024-07-25 11:33:37.624960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:21.880 BaseBdev2 00:22:21.880 11:33:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:22.138 spare_malloc 00:22:22.138 11:33:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:22.395 spare_delay 00:22:22.395 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:22.653 [2024-07-25 11:33:38.407969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.653 [2024-07-25 11:33:38.408077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.653 [2024-07-25 11:33:38.408119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:22.653 [2024-07-25 11:33:38.408135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.653 [2024-07-25 11:33:38.410947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.653 [2024-07-25 11:33:38.410990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.653 spare 00:22:22.654 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:22.913 [2024-07-25 11:33:38.672134] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.913 [2024-07-25 11:33:38.674712] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.913 [2024-07-25 11:33:38.674970] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:22.913 [2024-07-25 11:33:38.674994] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:22.913 [2024-07-25 11:33:38.675424] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:22.913 [2024-07-25 11:33:38.675645] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:22.913 [2024-07-25 11:33:38.675700] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:22.913 [2024-07-25 11:33:38.675966] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
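The trace up to this point builds the whole raid1 test topology over the bdevperf RPC socket: two 32 MiB malloc bdevs wrapped in passthru bdevs (BaseBdev1, BaseBdev2), a third malloc chained through a delay bdev into a passthru named spare, and finally bdev_raid_create with -s (superblock) and -r raid1. A minimal shell sketch of the same sequence, run by hand against the already-started bdevperf app; the socket path, sizes and flags are copied from the trace, and this is an illustration rather than the test script itself:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2; do
  $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc          # 32 MiB backing store, 512 B blocks
  $rpc bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
done
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000   # add write latency on the future rebuild target
$rpc bdev_passthru_create -b spare_delay -p spare
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1               # superblock-enabled raid1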
00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:22.913 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.914 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.171 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.171 "name": "raid_bdev1", 00:22:23.171 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:23.171 "strip_size_kb": 0, 00:22:23.171 "state": "online", 00:22:23.171 "raid_level": "raid1", 00:22:23.171 "superblock": true, 00:22:23.171 "num_base_bdevs": 2, 00:22:23.171 "num_base_bdevs_discovered": 2, 00:22:23.171 "num_base_bdevs_operational": 2, 00:22:23.172 "base_bdevs_list": [ 00:22:23.172 { 00:22:23.172 "name": "BaseBdev1", 00:22:23.172 "uuid": "fc8b99b5-a4b6-597a-8997-7124495befb1", 00:22:23.172 "is_configured": true, 00:22:23.172 "data_offset": 2048, 00:22:23.172 "data_size": 63488 00:22:23.172 }, 00:22:23.172 { 00:22:23.172 "name": "BaseBdev2", 00:22:23.172 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:23.172 "is_configured": true, 00:22:23.172 "data_offset": 2048, 00:22:23.172 "data_size": 63488 00:22:23.172 } 00:22:23.172 ] 00:22:23.172 }' 00:22:23.172 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.172 11:33:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:24.106 11:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:24.106 11:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:24.106 [2024-07-25 11:33:39.940911] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.106 11:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:22:24.106 11:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.106 11:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:24.672 [2024-07-25 11:33:40.388612] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:24.672 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:24.672 Zero copy mechanism will not be used. 00:22:24.672 Running I/O for 60 seconds... 
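verify_raid_bdev_state above works by dumping bdev_raid_get_bdevs and filtering the entry for raid_bdev1 with jq; the 2048-block data_offset is read the same way before bdevperf traffic starts and BaseBdev1 is pulled out of the online array. A hedged sketch of that state check, reusing the jq filters from the trace (the asserted values are simply the ones this particular run expects):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[ "$(jq -r .state <<<"$info")" = online ]                        # array must be online
[ "$(jq -r .raid_level <<<"$info")" = raid1 ]
[ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 2 ]       # both members present before the removal
data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')   # 2048 in this run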
00:22:24.672 [2024-07-25 11:33:40.501359] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.672 [2024-07-25 11:33:40.515972] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:24.672 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.673 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.930 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.930 "name": "raid_bdev1", 00:22:24.930 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:24.930 "strip_size_kb": 0, 00:22:24.930 "state": "online", 00:22:24.930 "raid_level": "raid1", 00:22:24.930 "superblock": true, 00:22:24.930 "num_base_bdevs": 2, 00:22:24.930 "num_base_bdevs_discovered": 1, 00:22:24.930 "num_base_bdevs_operational": 1, 00:22:24.930 "base_bdevs_list": [ 00:22:24.930 { 00:22:24.930 "name": null, 00:22:24.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.930 "is_configured": false, 00:22:24.930 "data_offset": 2048, 00:22:24.930 "data_size": 63488 00:22:24.930 }, 00:22:24.930 { 00:22:24.930 "name": "BaseBdev2", 00:22:24.930 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:24.930 "is_configured": true, 00:22:24.930 "data_offset": 2048, 00:22:24.930 "data_size": 63488 00:22:24.930 } 00:22:24.930 ] 00:22:24.930 }' 00:22:24.930 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.930 11:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:25.865 11:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:26.123 [2024-07-25 11:33:41.800304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:26.123 11:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:26.123 [2024-07-25 11:33:41.860265] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:26.123 [2024-07-25 11:33:41.862722] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.123 
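With BaseBdev1 removed the array keeps serving I/O degraded (one discovered base bdev, an all-zero first slot), and adding spare via bdev_raid_add_base_bdev immediately starts a rebuild process onto it, as the "Started rebuild on raid bdev raid_bdev1" notice shows. A short sketch of that degrade-and-rebuild step, with names and RPCs taken from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev BaseBdev1           # drop to 1 of 2 base bdevs; state stays online
$rpc bdev_raid_add_base_bdev raid_bdev1 spare       # spare is claimed and a rebuild process starts
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").process'   # {"type":"rebuild","target":"spare",...}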
[2024-07-25 11:33:41.995042] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:26.123 [2024-07-25 11:33:41.995722] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:26.382 [2024-07-25 11:33:42.207437] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:26.382 [2024-07-25 11:33:42.207836] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:26.949 [2024-07-25 11:33:42.550647] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:26.949 [2024-07-25 11:33:42.551574] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:26.949 [2024-07-25 11:33:42.754652] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:26.949 [2024-07-25 11:33:42.755255] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.207 11:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:27.465 "name": "raid_bdev1", 00:22:27.465 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:27.465 "strip_size_kb": 0, 00:22:27.465 "state": "online", 00:22:27.465 "raid_level": "raid1", 00:22:27.465 "superblock": true, 00:22:27.465 "num_base_bdevs": 2, 00:22:27.465 "num_base_bdevs_discovered": 2, 00:22:27.465 "num_base_bdevs_operational": 2, 00:22:27.465 "process": { 00:22:27.465 "type": "rebuild", 00:22:27.465 "target": "spare", 00:22:27.465 "progress": { 00:22:27.465 "blocks": 14336, 00:22:27.465 "percent": 22 00:22:27.465 } 00:22:27.465 }, 00:22:27.465 "base_bdevs_list": [ 00:22:27.465 { 00:22:27.465 "name": "spare", 00:22:27.465 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:27.465 "is_configured": true, 00:22:27.465 "data_offset": 2048, 00:22:27.465 "data_size": 63488 00:22:27.465 }, 00:22:27.465 { 00:22:27.465 "name": "BaseBdev2", 00:22:27.465 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:27.465 "is_configured": true, 00:22:27.465 "data_offset": 2048, 00:22:27.465 "data_size": 63488 00:22:27.465 } 00:22:27.465 ] 00:22:27.465 }' 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.465 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:27.722 [2024-07-25 11:33:43.444286] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:27.722 [2024-07-25 11:33:43.524349] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.722 [2024-07-25 11:33:43.575134] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.722 [2024-07-25 11:33:43.583351] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.722 [2024-07-25 11:33:43.583392] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.722 [2024-07-25 11:33:43.583406] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:27.980 [2024-07-25 11:33:43.612140] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.980 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.238 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.238 "name": "raid_bdev1", 00:22:28.238 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:28.238 "strip_size_kb": 0, 00:22:28.238 "state": "online", 00:22:28.238 "raid_level": "raid1", 00:22:28.238 "superblock": true, 00:22:28.238 "num_base_bdevs": 2, 00:22:28.238 "num_base_bdevs_discovered": 1, 00:22:28.238 "num_base_bdevs_operational": 1, 00:22:28.238 "base_bdevs_list": [ 00:22:28.238 { 00:22:28.238 "name": null, 00:22:28.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.238 "is_configured": false, 00:22:28.238 "data_offset": 2048, 00:22:28.238 "data_size": 63488 00:22:28.238 }, 00:22:28.238 { 
00:22:28.238 "name": "BaseBdev2", 00:22:28.238 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:28.238 "is_configured": true, 00:22:28.238 "data_offset": 2048, 00:22:28.238 "data_size": 63488 00:22:28.238 } 00:22:28.238 ] 00:22:28.238 }' 00:22:28.238 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.238 11:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.804 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.453 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:29.453 "name": "raid_bdev1", 00:22:29.453 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:29.453 "strip_size_kb": 0, 00:22:29.453 "state": "online", 00:22:29.453 "raid_level": "raid1", 00:22:29.453 "superblock": true, 00:22:29.453 "num_base_bdevs": 2, 00:22:29.453 "num_base_bdevs_discovered": 1, 00:22:29.453 "num_base_bdevs_operational": 1, 00:22:29.453 "base_bdevs_list": [ 00:22:29.453 { 00:22:29.453 "name": null, 00:22:29.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.453 "is_configured": false, 00:22:29.453 "data_offset": 2048, 00:22:29.453 "data_size": 63488 00:22:29.453 }, 00:22:29.453 { 00:22:29.453 "name": "BaseBdev2", 00:22:29.453 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:29.453 "is_configured": true, 00:22:29.453 "data_offset": 2048, 00:22:29.453 "data_size": 63488 00:22:29.453 } 00:22:29.453 ] 00:22:29.453 }' 00:22:29.453 11:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:29.453 11:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:29.453 11:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:29.453 11:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:29.453 11:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:29.710 [2024-07-25 11:33:45.428409] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:29.710 [2024-07-25 11:33:45.502348] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:29.710 [2024-07-25 11:33:45.505137] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:29.710 11:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:22:29.966 [2024-07-25 11:33:45.607685] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
2048 offset_begin: 0 offset_end: 6144 00:22:29.966 [2024-07-25 11:33:45.608108] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:29.966 [2024-07-25 11:33:45.829543] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:29.966 [2024-07-25 11:33:45.829935] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:30.531 [2024-07-25 11:33:46.229021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:30.531 [2024-07-25 11:33:46.229697] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:30.788 [2024-07-25 11:33:46.458533] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:30.788 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.788 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:30.788 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:30.788 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:30.788 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:30.789 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.789 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:31.047 "name": "raid_bdev1", 00:22:31.047 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:31.047 "strip_size_kb": 0, 00:22:31.047 "state": "online", 00:22:31.047 "raid_level": "raid1", 00:22:31.047 "superblock": true, 00:22:31.047 "num_base_bdevs": 2, 00:22:31.047 "num_base_bdevs_discovered": 2, 00:22:31.047 "num_base_bdevs_operational": 2, 00:22:31.047 "process": { 00:22:31.047 "type": "rebuild", 00:22:31.047 "target": "spare", 00:22:31.047 "progress": { 00:22:31.047 "blocks": 14336, 00:22:31.047 "percent": 22 00:22:31.047 } 00:22:31.047 }, 00:22:31.047 "base_bdevs_list": [ 00:22:31.047 { 00:22:31.047 "name": "spare", 00:22:31.047 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:31.047 "is_configured": true, 00:22:31.047 "data_offset": 2048, 00:22:31.047 "data_size": 63488 00:22:31.047 }, 00:22:31.047 { 00:22:31.047 "name": "BaseBdev2", 00:22:31.047 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:31.047 "is_configured": true, 00:22:31.047 "data_offset": 2048, 00:22:31.047 "data_size": 63488 00:22:31.047 } 00:22:31.047 ] 00:22:31.047 }' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:31.047 [2024-07-25 11:33:46.821894] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.target // "none"' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:22:31.047 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=990 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.047 11:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.306 [2024-07-25 11:33:47.041528] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:31.306 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:31.306 "name": "raid_bdev1", 00:22:31.306 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:31.306 "strip_size_kb": 0, 00:22:31.306 "state": "online", 00:22:31.306 "raid_level": "raid1", 00:22:31.306 "superblock": true, 00:22:31.306 "num_base_bdevs": 2, 00:22:31.306 "num_base_bdevs_discovered": 2, 00:22:31.306 "num_base_bdevs_operational": 2, 00:22:31.306 "process": { 00:22:31.306 "type": "rebuild", 00:22:31.306 "target": "spare", 00:22:31.306 "progress": { 00:22:31.306 "blocks": 20480, 00:22:31.306 "percent": 32 00:22:31.306 } 00:22:31.306 }, 00:22:31.306 "base_bdevs_list": [ 00:22:31.306 { 00:22:31.306 "name": "spare", 00:22:31.306 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:31.306 "is_configured": true, 00:22:31.306 "data_offset": 2048, 00:22:31.306 "data_size": 63488 00:22:31.306 }, 00:22:31.306 { 00:22:31.306 "name": "BaseBdev2", 00:22:31.306 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:31.306 "is_configured": true, 00:22:31.306 "data_offset": 2048, 00:22:31.306 "data_size": 63488 00:22:31.306 } 00:22:31.306 ] 00:22:31.306 }' 00:22:31.306 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:31.306 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.306 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:31.564 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.564 11:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:31.822 [2024-07-25 11:33:47.479724] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:31.822 [2024-07-25 11:33:47.480436] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:32.081 [2024-07-25 11:33:47.717273] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:32.339 [2024-07-25 11:33:48.137757] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.597 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.597 [2024-07-25 11:33:48.358801] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:32.857 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:32.857 "name": "raid_bdev1", 00:22:32.857 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:32.857 "strip_size_kb": 0, 00:22:32.857 "state": "online", 00:22:32.857 "raid_level": "raid1", 00:22:32.857 "superblock": true, 00:22:32.857 "num_base_bdevs": 2, 00:22:32.857 "num_base_bdevs_discovered": 2, 00:22:32.857 "num_base_bdevs_operational": 2, 00:22:32.857 "process": { 00:22:32.857 "type": "rebuild", 00:22:32.857 "target": "spare", 00:22:32.857 "progress": { 00:22:32.857 "blocks": 40960, 00:22:32.857 "percent": 64 00:22:32.857 } 00:22:32.857 }, 00:22:32.857 "base_bdevs_list": [ 00:22:32.857 { 00:22:32.857 "name": "spare", 00:22:32.857 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:32.857 "is_configured": true, 00:22:32.857 "data_offset": 2048, 00:22:32.857 "data_size": 63488 00:22:32.857 }, 00:22:32.857 { 00:22:32.857 "name": "BaseBdev2", 00:22:32.857 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:32.857 "is_configured": true, 00:22:32.857 "data_offset": 2048, 00:22:32.857 "data_size": 63488 00:22:32.857 } 00:22:32.857 ] 00:22:32.857 }' 00:22:32.857 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:32.857 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.857 11:33:48 
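The "[: =: unary operator expected" message from bdev_raid.sh line 681 a few entries back is a shell quoting bug rather than a RAID failure: a variable that is empty in this code path is expanded unquoted inside a single-bracket test, so the shell only sees '[ = false ]'. The usual fix is sketched below with a hypothetical variable name, since the real one is not visible in the log; the surrounding loop in the trace simply keeps re-reading .process.progress.blocks until the rebuild finishes or the 990-second timeout expires.

# failing pattern when the variable is empty:
#   [ $fail_last_base_bdev = false ]   ->  bdev_raid.sh: line 681: [: =: unary operator expected
# safer: give the expansion a default and/or use the [[ ]] builtin, which tolerates empty words
if [[ "${fail_last_base_bdev:-false}" = false ]]; then
  echo "not failing the last remaining base bdev"
fi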
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:32.857 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.857 11:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:33.115 [2024-07-25 11:33:48.801211] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:33.373 [2024-07-25 11:33:49.004903] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:33.373 [2024-07-25 11:33:49.214111] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.940 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.199 [2024-07-25 11:33:49.869222] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:34.199 "name": "raid_bdev1", 00:22:34.199 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:34.199 "strip_size_kb": 0, 00:22:34.199 "state": "online", 00:22:34.199 "raid_level": "raid1", 00:22:34.199 "superblock": true, 00:22:34.199 "num_base_bdevs": 2, 00:22:34.199 "num_base_bdevs_discovered": 2, 00:22:34.199 "num_base_bdevs_operational": 2, 00:22:34.199 "process": { 00:22:34.199 "type": "rebuild", 00:22:34.199 "target": "spare", 00:22:34.199 "progress": { 00:22:34.199 "blocks": 61440, 00:22:34.199 "percent": 96 00:22:34.199 } 00:22:34.199 }, 00:22:34.199 "base_bdevs_list": [ 00:22:34.199 { 00:22:34.199 "name": "spare", 00:22:34.199 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:34.199 "is_configured": true, 00:22:34.199 "data_offset": 2048, 00:22:34.199 "data_size": 63488 00:22:34.199 }, 00:22:34.199 { 00:22:34.199 "name": "BaseBdev2", 00:22:34.199 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:34.199 "is_configured": true, 00:22:34.199 "data_offset": 2048, 00:22:34.199 "data_size": 63488 00:22:34.199 } 00:22:34.199 ] 00:22:34.199 }' 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:34.199 [2024-07-25 11:33:49.976421] bdev_raid.c:2548:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:34.199 [2024-07-25 11:33:49.978612] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.199 11:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:35.207 11:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:35.207 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.207 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.466 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:35.466 "name": "raid_bdev1", 00:22:35.466 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:35.466 "strip_size_kb": 0, 00:22:35.466 "state": "online", 00:22:35.466 "raid_level": "raid1", 00:22:35.466 "superblock": true, 00:22:35.466 "num_base_bdevs": 2, 00:22:35.466 "num_base_bdevs_discovered": 2, 00:22:35.466 "num_base_bdevs_operational": 2, 00:22:35.466 "base_bdevs_list": [ 00:22:35.466 { 00:22:35.466 "name": "spare", 00:22:35.466 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:35.466 "is_configured": true, 00:22:35.466 "data_offset": 2048, 00:22:35.466 "data_size": 63488 00:22:35.466 }, 00:22:35.466 { 00:22:35.466 "name": "BaseBdev2", 00:22:35.466 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:35.466 "is_configured": true, 00:22:35.466 "data_offset": 2048, 00:22:35.466 "data_size": 63488 00:22:35.466 } 00:22:35.466 ] 00:22:35.466 }' 00:22:35.466 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:35.466 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:35.466 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.724 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:35.983 "name": "raid_bdev1", 00:22:35.983 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:35.983 "strip_size_kb": 0, 00:22:35.983 "state": "online", 00:22:35.983 "raid_level": "raid1", 00:22:35.983 "superblock": true, 00:22:35.983 "num_base_bdevs": 2, 00:22:35.983 "num_base_bdevs_discovered": 2, 00:22:35.983 "num_base_bdevs_operational": 2, 00:22:35.983 "base_bdevs_list": [ 00:22:35.983 { 00:22:35.983 "name": "spare", 00:22:35.983 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:35.983 "is_configured": true, 00:22:35.983 "data_offset": 2048, 00:22:35.983 "data_size": 63488 00:22:35.983 }, 00:22:35.983 { 00:22:35.983 "name": "BaseBdev2", 00:22:35.983 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:35.983 "is_configured": true, 00:22:35.983 "data_offset": 2048, 00:22:35.983 "data_size": 63488 00:22:35.983 } 00:22:35.983 ] 00:22:35.983 }' 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:35.983 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.984 11:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.243 11:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.243 "name": "raid_bdev1", 00:22:36.243 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:36.243 "strip_size_kb": 0, 00:22:36.243 "state": "online", 00:22:36.243 "raid_level": "raid1", 00:22:36.243 "superblock": true, 00:22:36.243 "num_base_bdevs": 2, 00:22:36.243 "num_base_bdevs_discovered": 2, 00:22:36.243 
"num_base_bdevs_operational": 2, 00:22:36.243 "base_bdevs_list": [ 00:22:36.243 { 00:22:36.243 "name": "spare", 00:22:36.243 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:36.243 "is_configured": true, 00:22:36.243 "data_offset": 2048, 00:22:36.243 "data_size": 63488 00:22:36.243 }, 00:22:36.243 { 00:22:36.243 "name": "BaseBdev2", 00:22:36.243 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:36.243 "is_configured": true, 00:22:36.243 "data_offset": 2048, 00:22:36.243 "data_size": 63488 00:22:36.243 } 00:22:36.243 ] 00:22:36.243 }' 00:22:36.243 11:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.243 11:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:37.199 11:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.199 [2024-07-25 11:33:53.031517] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.199 [2024-07-25 11:33:53.031561] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.199 00:22:37.199 Latency(us) 00:22:37.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.199 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:37.199 raid_bdev1 : 12.67 103.72 311.17 0.00 0.00 13332.14 271.83 119632.99 00:22:37.199 =================================================================================================================== 00:22:37.199 Total : 103.72 311.17 0.00 0.00 13332.14 271.83 119632.99 00:22:37.199 0 00:22:37.199 [2024-07-25 11:33:53.079183] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.199 [2024-07-25 11:33:53.079240] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.199 [2024-07-25 11:33:53.079349] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.199 [2024-07-25 11:33:53.079367] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:37.458 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.458 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.716 11:33:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.716 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:37.716 /dev/nbd0 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:37.975 1+0 records in 00:22:37.975 1+0 records out 00:22:37.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330885 s, 12.4 MB/s 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:37.975 11:33:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.975 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:38.233 /dev/nbd1 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.233 1+0 records in 00:22:38.233 1+0 records out 00:22:38.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225858 s, 18.1 MB/s 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.233 11:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.233 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.492 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:22:38.750 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:39.008 11:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:39.267 [2024-07-25 11:33:55.040811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:39.267 [2024-07-25 11:33:55.040894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.267 [2024-07-25 11:33:55.040934] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000a580 00:22:39.267 [2024-07-25 11:33:55.040949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.267 [2024-07-25 11:33:55.043798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.267 [2024-07-25 11:33:55.043842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.267 [2024-07-25 11:33:55.043966] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:39.267 [2024-07-25 11:33:55.044064] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.267 [2024-07-25 11:33:55.044259] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.267 spare 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.267 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.267 [2024-07-25 11:33:55.144378] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:39.267 [2024-07-25 11:33:55.144438] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:39.267 [2024-07-25 11:33:55.144847] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:22:39.267 [2024-07-25 11:33:55.145078] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:39.267 [2024-07-25 11:33:55.145094] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:39.267 [2024-07-25 11:33:55.145297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.525 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.525 "name": "raid_bdev1", 00:22:39.525 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:39.525 "strip_size_kb": 0, 00:22:39.525 "state": "online", 00:22:39.525 "raid_level": "raid1", 00:22:39.525 "superblock": true, 00:22:39.525 "num_base_bdevs": 2, 00:22:39.525 "num_base_bdevs_discovered": 2, 00:22:39.525 "num_base_bdevs_operational": 2, 00:22:39.525 "base_bdevs_list": [ 00:22:39.525 { 00:22:39.525 "name": "spare", 
00:22:39.525 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:39.525 "is_configured": true, 00:22:39.525 "data_offset": 2048, 00:22:39.525 "data_size": 63488 00:22:39.525 }, 00:22:39.525 { 00:22:39.525 "name": "BaseBdev2", 00:22:39.525 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:39.525 "is_configured": true, 00:22:39.525 "data_offset": 2048, 00:22:39.525 "data_size": 63488 00:22:39.525 } 00:22:39.525 ] 00:22:39.525 }' 00:22:39.525 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.525 11:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.456 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:40.714 "name": "raid_bdev1", 00:22:40.714 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:40.714 "strip_size_kb": 0, 00:22:40.714 "state": "online", 00:22:40.714 "raid_level": "raid1", 00:22:40.714 "superblock": true, 00:22:40.714 "num_base_bdevs": 2, 00:22:40.714 "num_base_bdevs_discovered": 2, 00:22:40.714 "num_base_bdevs_operational": 2, 00:22:40.714 "base_bdevs_list": [ 00:22:40.714 { 00:22:40.714 "name": "spare", 00:22:40.714 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:40.714 "is_configured": true, 00:22:40.714 "data_offset": 2048, 00:22:40.714 "data_size": 63488 00:22:40.714 }, 00:22:40.714 { 00:22:40.714 "name": "BaseBdev2", 00:22:40.714 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:40.714 "is_configured": true, 00:22:40.714 "data_offset": 2048, 00:22:40.714 "data_size": 63488 00:22:40.714 } 00:22:40.714 ] 00:22:40.714 }' 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:40.714 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.971 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.971 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
spare 00:22:41.229 [2024-07-25 11:33:56.954233] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.229 11:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.488 11:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.488 "name": "raid_bdev1", 00:22:41.488 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:41.488 "strip_size_kb": 0, 00:22:41.488 "state": "online", 00:22:41.488 "raid_level": "raid1", 00:22:41.488 "superblock": true, 00:22:41.488 "num_base_bdevs": 2, 00:22:41.488 "num_base_bdevs_discovered": 1, 00:22:41.488 "num_base_bdevs_operational": 1, 00:22:41.488 "base_bdevs_list": [ 00:22:41.488 { 00:22:41.488 "name": null, 00:22:41.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.488 "is_configured": false, 00:22:41.488 "data_offset": 2048, 00:22:41.488 "data_size": 63488 00:22:41.488 }, 00:22:41.488 { 00:22:41.488 "name": "BaseBdev2", 00:22:41.488 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:41.488 "is_configured": true, 00:22:41.488 "data_offset": 2048, 00:22:41.488 "data_size": 63488 00:22:41.488 } 00:22:41.488 ] 00:22:41.488 }' 00:22:41.488 11:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.488 11:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:42.058 11:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.316 [2024-07-25 11:33:58.186878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.316 [2024-07-25 11:33:58.187140] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:42.316 [2024-07-25 11:33:58.187165] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
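The remove/re-add cycle traced here is driven entirely over the RPC socket: the spare base bdev is detached with bdev_raid_remove_base_bdev, raid_bdev1 is expected to stay online with only one discovered base bdev, and the spare is then handed back via bdev_raid_add_base_bdev so the superblock examine path re-claims it (its seq_number 4 is older than the array's 5) and a rebuild is started. A minimal standalone sketch of that cycle, using the same socket path and bdev names as this run; the jq assertions are illustrative, not the test's own verify_raid_bdev_state helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Detach the spare; the raid1 array must stay online, degraded to 1 of 2 members.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev spare
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -e '.[] | select(.name == "raid_bdev1") | .state == "online" and .num_base_bdevs_discovered == 1'
    # Hand the spare back; examine finds the older superblock and re-adds it to raid_bdev1.
    "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare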
00:22:42.316 [2024-07-25 11:33:58.187215] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.574 [2024-07-25 11:33:58.202816] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:22:42.574 [2024-07-25 11:33:58.205338] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.574 11:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.509 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.766 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.766 "name": "raid_bdev1", 00:22:43.766 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:43.766 "strip_size_kb": 0, 00:22:43.766 "state": "online", 00:22:43.766 "raid_level": "raid1", 00:22:43.766 "superblock": true, 00:22:43.766 "num_base_bdevs": 2, 00:22:43.766 "num_base_bdevs_discovered": 2, 00:22:43.766 "num_base_bdevs_operational": 2, 00:22:43.766 "process": { 00:22:43.766 "type": "rebuild", 00:22:43.766 "target": "spare", 00:22:43.766 "progress": { 00:22:43.766 "blocks": 24576, 00:22:43.766 "percent": 38 00:22:43.766 } 00:22:43.766 }, 00:22:43.766 "base_bdevs_list": [ 00:22:43.766 { 00:22:43.766 "name": "spare", 00:22:43.767 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:43.767 "is_configured": true, 00:22:43.767 "data_offset": 2048, 00:22:43.767 "data_size": 63488 00:22:43.767 }, 00:22:43.767 { 00:22:43.767 "name": "BaseBdev2", 00:22:43.767 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:43.767 "is_configured": true, 00:22:43.767 "data_offset": 2048, 00:22:43.767 "data_size": 63488 00:22:43.767 } 00:22:43.767 ] 00:22:43.767 }' 00:22:43.767 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:43.767 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.767 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:43.767 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.767 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:44.024 [2024-07-25 11:33:59.831399] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.282 [2024-07-25 11:33:59.918040] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:44.282 [2024-07-25 11:33:59.918148] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:22:44.282 [2024-07-25 11:33:59.918174] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:44.282 [2024-07-25 11:33:59.918197] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.282 11:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.540 11:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.540 "name": "raid_bdev1", 00:22:44.540 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:44.540 "strip_size_kb": 0, 00:22:44.540 "state": "online", 00:22:44.540 "raid_level": "raid1", 00:22:44.540 "superblock": true, 00:22:44.540 "num_base_bdevs": 2, 00:22:44.540 "num_base_bdevs_discovered": 1, 00:22:44.540 "num_base_bdevs_operational": 1, 00:22:44.540 "base_bdevs_list": [ 00:22:44.540 { 00:22:44.540 "name": null, 00:22:44.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.540 "is_configured": false, 00:22:44.540 "data_offset": 2048, 00:22:44.540 "data_size": 63488 00:22:44.540 }, 00:22:44.540 { 00:22:44.540 "name": "BaseBdev2", 00:22:44.540 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:44.540 "is_configured": true, 00:22:44.540 "data_offset": 2048, 00:22:44.540 "data_size": 63488 00:22:44.540 } 00:22:44.540 ] 00:22:44.540 }' 00:22:44.540 11:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.540 11:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:45.107 11:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:45.365 [2024-07-25 11:34:01.103995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:45.365 [2024-07-25 11:34:01.104115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.365 [2024-07-25 11:34:01.104149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:45.365 [2024-07-25 11:34:01.104169] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.365 [2024-07-25 11:34:01.104809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.365 [2024-07-25 11:34:01.104841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:45.365 [2024-07-25 11:34:01.104963] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:45.365 [2024-07-25 11:34:01.104987] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:45.365 [2024-07-25 11:34:01.105009] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:45.365 [2024-07-25 11:34:01.105047] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:45.365 [2024-07-25 11:34:01.120232] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:22:45.365 spare 00:22:45.365 [2024-07-25 11:34:01.126675] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:45.365 11:34:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.316 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.574 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.574 "name": "raid_bdev1", 00:22:46.574 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:46.574 "strip_size_kb": 0, 00:22:46.574 "state": "online", 00:22:46.574 "raid_level": "raid1", 00:22:46.574 "superblock": true, 00:22:46.574 "num_base_bdevs": 2, 00:22:46.574 "num_base_bdevs_discovered": 2, 00:22:46.574 "num_base_bdevs_operational": 2, 00:22:46.574 "process": { 00:22:46.574 "type": "rebuild", 00:22:46.574 "target": "spare", 00:22:46.574 "progress": { 00:22:46.574 "blocks": 24576, 00:22:46.574 "percent": 38 00:22:46.574 } 00:22:46.574 }, 00:22:46.574 "base_bdevs_list": [ 00:22:46.574 { 00:22:46.574 "name": "spare", 00:22:46.574 "uuid": "c18b34b0-3b9a-506c-912f-42e2b797ed73", 00:22:46.574 "is_configured": true, 00:22:46.574 "data_offset": 2048, 00:22:46.574 "data_size": 63488 00:22:46.574 }, 00:22:46.574 { 00:22:46.574 "name": "BaseBdev2", 00:22:46.574 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:46.574 "is_configured": true, 00:22:46.574 "data_offset": 2048, 00:22:46.574 "data_size": 63488 00:22:46.574 } 00:22:46.574 ] 00:22:46.574 }' 00:22:46.574 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:46.832 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
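While the rebuild runs, verify_raid_bdev_process reduces to two jq projections over the same bdev_raid_get_bdevs dump: .process.type must read "rebuild" and .process.target must name the bdev being reconstructed, with .process.progress reporting the block count and percentage seen above (24576 blocks, 38%). A self-contained equivalent of that check; the polling loop is illustrative only, names and socket path are the ones from this trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Poll until the rebuild process disappears from raid_bdev1 (sketch, not the test helper).
    while :; do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        jq -r '"rebuilding \(.process.target): \(.process.progress.blocks) blocks (\(.process.progress.percent)%)"' <<< "$info"
        sleep 1
    done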
00:22:46.832 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:46.832 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:46.832 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:47.090 [2024-07-25 11:34:02.814689] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.090 [2024-07-25 11:34:02.839246] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:47.090 [2024-07-25 11:34:02.839329] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.090 [2024-07-25 11:34:02.839363] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:47.090 [2024-07-25 11:34:02.839377] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.090 11:34:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.348 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.348 "name": "raid_bdev1", 00:22:47.348 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:47.348 "strip_size_kb": 0, 00:22:47.348 "state": "online", 00:22:47.348 "raid_level": "raid1", 00:22:47.348 "superblock": true, 00:22:47.348 "num_base_bdevs": 2, 00:22:47.348 "num_base_bdevs_discovered": 1, 00:22:47.348 "num_base_bdevs_operational": 1, 00:22:47.348 "base_bdevs_list": [ 00:22:47.348 { 00:22:47.348 "name": null, 00:22:47.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.348 "is_configured": false, 00:22:47.348 "data_offset": 2048, 00:22:47.348 "data_size": 63488 00:22:47.348 }, 00:22:47.348 { 00:22:47.348 "name": "BaseBdev2", 00:22:47.348 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:47.348 "is_configured": true, 00:22:47.348 "data_offset": 2048, 00:22:47.348 "data_size": 63488 00:22:47.348 } 00:22:47.348 ] 00:22:47.348 }' 00:22:47.348 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:22:47.348 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.282 11:34:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.282 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.282 "name": "raid_bdev1", 00:22:48.282 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:48.282 "strip_size_kb": 0, 00:22:48.282 "state": "online", 00:22:48.282 "raid_level": "raid1", 00:22:48.282 "superblock": true, 00:22:48.282 "num_base_bdevs": 2, 00:22:48.282 "num_base_bdevs_discovered": 1, 00:22:48.282 "num_base_bdevs_operational": 1, 00:22:48.282 "base_bdevs_list": [ 00:22:48.282 { 00:22:48.282 "name": null, 00:22:48.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.282 "is_configured": false, 00:22:48.282 "data_offset": 2048, 00:22:48.282 "data_size": 63488 00:22:48.282 }, 00:22:48.282 { 00:22:48.282 "name": "BaseBdev2", 00:22:48.282 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:48.282 "is_configured": true, 00:22:48.282 "data_offset": 2048, 00:22:48.282 "data_size": 63488 00:22:48.282 } 00:22:48.282 ] 00:22:48.283 }' 00:22:48.283 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:48.540 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:48.541 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:48.541 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:48.541 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:48.798 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:49.056 [2024-07-25 11:34:04.813326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:49.056 [2024-07-25 11:34:04.813639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.056 [2024-07-25 11:34:04.813700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:49.056 [2024-07-25 11:34:04.813716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.056 [2024-07-25 11:34:04.814279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.056 [2024-07-25 11:34:04.814305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:22:49.056 [2024-07-25 11:34:04.814416] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:49.056 [2024-07-25 11:34:04.814437] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:49.056 [2024-07-25 11:34:04.814454] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:49.056 BaseBdev1 00:22:49.056 11:34:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.990 11:34:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.567 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.567 "name": "raid_bdev1", 00:22:50.567 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:50.567 "strip_size_kb": 0, 00:22:50.567 "state": "online", 00:22:50.567 "raid_level": "raid1", 00:22:50.567 "superblock": true, 00:22:50.567 "num_base_bdevs": 2, 00:22:50.567 "num_base_bdevs_discovered": 1, 00:22:50.567 "num_base_bdevs_operational": 1, 00:22:50.567 "base_bdevs_list": [ 00:22:50.567 { 00:22:50.567 "name": null, 00:22:50.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.567 "is_configured": false, 00:22:50.567 "data_offset": 2048, 00:22:50.567 "data_size": 63488 00:22:50.567 }, 00:22:50.567 { 00:22:50.567 "name": "BaseBdev2", 00:22:50.567 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:50.567 "is_configured": true, 00:22:50.567 "data_offset": 2048, 00:22:50.567 "data_size": 63488 00:22:50.567 } 00:22:50.567 ] 00:22:50.567 }' 00:22:50.567 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.567 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.134 11:34:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:51.392 "name": "raid_bdev1", 00:22:51.392 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:51.392 "strip_size_kb": 0, 00:22:51.392 "state": "online", 00:22:51.392 "raid_level": "raid1", 00:22:51.392 "superblock": true, 00:22:51.392 "num_base_bdevs": 2, 00:22:51.392 "num_base_bdevs_discovered": 1, 00:22:51.392 "num_base_bdevs_operational": 1, 00:22:51.392 "base_bdevs_list": [ 00:22:51.392 { 00:22:51.392 "name": null, 00:22:51.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.392 "is_configured": false, 00:22:51.392 "data_offset": 2048, 00:22:51.392 "data_size": 63488 00:22:51.392 }, 00:22:51.392 { 00:22:51.392 "name": "BaseBdev2", 00:22:51.392 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:51.392 "is_configured": true, 00:22:51.392 "data_offset": 2048, 00:22:51.392 "data_size": 63488 00:22:51.392 } 00:22:51.392 ] 00:22:51.392 }' 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:51.392 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:51.650 [2024-07-25 11:34:07.478508] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.650 [2024-07-25 11:34:07.478772] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:51.650 [2024-07-25 11:34:07.478794] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:51.650 request: 00:22:51.650 { 00:22:51.650 "base_bdev": "BaseBdev1", 00:22:51.650 "raid_bdev": "raid_bdev1", 00:22:51.650 "method": "bdev_raid_add_base_bdev", 00:22:51.650 "req_id": 1 00:22:51.650 } 00:22:51.650 Got JSON-RPC error response 00:22:51.650 response: 00:22:51.650 { 00:22:51.650 "code": -22, 00:22:51.650 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:51.650 } 00:22:51.650 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:22:51.650 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:51.650 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:51.650 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:51.650 11:34:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.026 "name": "raid_bdev1", 00:22:53.026 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:53.026 "strip_size_kb": 0, 00:22:53.026 "state": "online", 00:22:53.026 "raid_level": "raid1", 00:22:53.026 "superblock": true, 00:22:53.026 "num_base_bdevs": 2, 00:22:53.026 "num_base_bdevs_discovered": 1, 00:22:53.026 "num_base_bdevs_operational": 1, 00:22:53.026 
"base_bdevs_list": [ 00:22:53.026 { 00:22:53.026 "name": null, 00:22:53.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.026 "is_configured": false, 00:22:53.026 "data_offset": 2048, 00:22:53.026 "data_size": 63488 00:22:53.026 }, 00:22:53.026 { 00:22:53.026 "name": "BaseBdev2", 00:22:53.026 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:53.026 "is_configured": true, 00:22:53.026 "data_offset": 2048, 00:22:53.026 "data_size": 63488 00:22:53.026 } 00:22:53.026 ] 00:22:53.026 }' 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.026 11:34:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.961 "name": "raid_bdev1", 00:22:53.961 "uuid": "5e8c99c1-eecc-405a-a75f-c22a1cf205da", 00:22:53.961 "strip_size_kb": 0, 00:22:53.961 "state": "online", 00:22:53.961 "raid_level": "raid1", 00:22:53.961 "superblock": true, 00:22:53.961 "num_base_bdevs": 2, 00:22:53.961 "num_base_bdevs_discovered": 1, 00:22:53.961 "num_base_bdevs_operational": 1, 00:22:53.961 "base_bdevs_list": [ 00:22:53.961 { 00:22:53.961 "name": null, 00:22:53.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.961 "is_configured": false, 00:22:53.961 "data_offset": 2048, 00:22:53.961 "data_size": 63488 00:22:53.961 }, 00:22:53.961 { 00:22:53.961 "name": "BaseBdev2", 00:22:53.961 "uuid": "6ae50a80-48e1-5d89-86dc-de80a8fd7b26", 00:22:53.961 "is_configured": true, 00:22:53.961 "data_offset": 2048, 00:22:53.961 "data_size": 63488 00:22:53.961 } 00:22:53.961 ] 00:22:53.961 }' 00:22:53.961 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 87655 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87655 ']' 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87655 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:22:54.219 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87655 00:22:54.219 killing process with pid 87655 00:22:54.219 Received shutdown signal, test time was about 29.548653 seconds 00:22:54.219 00:22:54.219 Latency(us) 00:22:54.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.220 =================================================================================================================== 00:22:54.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.220 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:54.220 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:54.220 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87655' 00:22:54.220 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87655 00:22:54.220 [2024-07-25 11:34:09.939849] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.220 11:34:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87655 00:22:54.220 [2024-07-25 11:34:09.940029] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.220 [2024-07-25 11:34:09.940156] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.220 [2024-07-25 11:34:09.940183] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:54.478 [2024-07-25 11:34:10.135667] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:22:55.557 00:22:55.557 real 0m36.140s 00:22:55.557 user 0m57.535s 00:22:55.557 sys 0m3.956s 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:22:55.557 ************************************ 00:22:55.557 END TEST raid_rebuild_test_sb_io 00:22:55.557 ************************************ 00:22:55.557 11:34:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:22:55.557 11:34:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:22:55.557 11:34:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:55.557 11:34:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:55.557 11:34:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:55.557 ************************************ 00:22:55.557 START TEST raid_rebuild_test 00:22:55.557 ************************************ 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 
00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=88519 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 88519 /var/tmp/spdk-raid.sock 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88519 ']' 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:55.557 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
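Each rebuild test drives its own bdevperf instance rather than the regular SPDK app target: the command above starts it on a private RPC socket with a 3 MiB I/O size, queue depth 2, mixed 50/50 randrw and the bdev_raid debug log flag, and waitforlisten then blocks until that socket answers. A sketch of the same launch-and-wait sequence; the rpc_get_methods readiness probe is an assumption standing in for waitforlisten, everything else is taken from the trace:

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Wait for the RPC server before issuing any bdev_* calls (probe is illustrative).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done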
00:22:55.558 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.558 11:34:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:55.817 [2024-07-25 11:34:11.524115] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:22:55.817 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:55.817 Zero copy mechanism will not be used. 00:22:55.817 [2024-07-25 11:34:11.524310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88519 ] 00:22:55.817 [2024-07-25 11:34:11.697026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.075 [2024-07-25 11:34:11.937700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.334 [2024-07-25 11:34:12.141755] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.334 [2024-07-25 11:34:12.141841] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.902 11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.902 11:34:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:22:56.902 11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:56.902 11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:57.159 BaseBdev1_malloc 00:22:57.159 11:34:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:57.417 [2024-07-25 11:34:13.113524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:57.417 [2024-07-25 11:34:13.113644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.417 [2024-07-25 11:34:13.113687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:57.417 [2024-07-25 11:34:13.113704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.417 [2024-07-25 11:34:13.116551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.417 [2024-07-25 11:34:13.116651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:57.417 BaseBdev1 00:22:57.417 11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:57.417 11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:57.676 BaseBdev2_malloc 00:22:57.676 11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:57.934 [2024-07-25 11:34:13.692941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:57.934 [2024-07-25 11:34:13.693080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.934 [2024-07-25 11:34:13.693120] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:57.934 [2024-07-25 11:34:13.693135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.934 [2024-07-25 11:34:13.695921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.934 [2024-07-25 11:34:13.695980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:57.934 BaseBdev2 00:22:57.934 11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:57.934 11:34:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:58.191 BaseBdev3_malloc 00:22:58.448 11:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:58.706 [2024-07-25 11:34:14.386186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:58.706 [2024-07-25 11:34:14.386281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.706 [2024-07-25 11:34:14.386333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:58.706 [2024-07-25 11:34:14.386354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.706 [2024-07-25 11:34:14.389572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.706 [2024-07-25 11:34:14.389641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:58.706 BaseBdev3 00:22:58.706 11:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:58.706 11:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:58.964 BaseBdev4_malloc 00:22:58.964 11:34:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:59.233 [2024-07-25 11:34:14.994206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:59.233 [2024-07-25 11:34:14.994295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.233 [2024-07-25 11:34:14.994333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:59.233 [2024-07-25 11:34:14.994349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.233 [2024-07-25 11:34:14.997357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.233 [2024-07-25 11:34:14.997403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:59.233 BaseBdev4 00:22:59.233 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:59.518 spare_malloc 00:22:59.518 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:59.776 spare_delay 
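The four base bdevs above are each a 32 MB, 512-byte-block malloc bdev wrapped in a passthru bdev, and the spare inserts a delay bdev between its malloc and passthru layers (the 100000 latency arguments above), which keeps rebuild I/O toward the spare slow enough for the later process checks to catch it in flight. A condensed sketch of that stack, using the same RPCs and names the trace shows:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"              # 32 MB, 512 B blocks
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare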
00:22:59.776 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:00.034 [2024-07-25 11:34:15.701470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:00.034 [2024-07-25 11:34:15.701575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.034 [2024-07-25 11:34:15.701613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:00.034 [2024-07-25 11:34:15.701629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.034 [2024-07-25 11:34:15.704474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.034 [2024-07-25 11:34:15.704518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:00.034 spare 00:23:00.034 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:00.292 [2024-07-25 11:34:15.937632] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:00.292 [2024-07-25 11:34:15.940093] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.292 [2024-07-25 11:34:15.940231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:00.292 [2024-07-25 11:34:15.940310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:00.292 [2024-07-25 11:34:15.940481] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:00.292 [2024-07-25 11:34:15.940497] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:00.292 [2024-07-25 11:34:15.940995] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:00.292 [2024-07-25 11:34:15.941242] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:00.292 [2024-07-25 11:34:15.941263] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:00.292 [2024-07-25 11:34:15.941556] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.292 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.293 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.293 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.293 11:34:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.293 11:34:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.551 11:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.551 "name": "raid_bdev1", 00:23:00.551 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:00.551 "strip_size_kb": 0, 00:23:00.551 "state": "online", 00:23:00.551 "raid_level": "raid1", 00:23:00.551 "superblock": false, 00:23:00.551 "num_base_bdevs": 4, 00:23:00.551 "num_base_bdevs_discovered": 4, 00:23:00.551 "num_base_bdevs_operational": 4, 00:23:00.551 "base_bdevs_list": [ 00:23:00.551 { 00:23:00.551 "name": "BaseBdev1", 00:23:00.551 "uuid": "ac8f975d-9db2-5d2e-991a-695deec45e76", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 }, 00:23:00.551 { 00:23:00.551 "name": "BaseBdev2", 00:23:00.551 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 }, 00:23:00.551 { 00:23:00.551 "name": "BaseBdev3", 00:23:00.551 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 }, 00:23:00.551 { 00:23:00.551 "name": "BaseBdev4", 00:23:00.551 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 } 00:23:00.551 ] 00:23:00.551 }' 00:23:00.551 11:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.551 11:34:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:01.117 11:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:01.117 11:34:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:01.376 [2024-07-25 11:34:17.014373] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 
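With four 65536-block members assembled into raid_bdev1, the test reads the array size back through bdev_get_bdevs (65536 blocks, since raid1 mirrors rather than concatenates), takes data_offset 0 from bdev_raid_get_bdevs because this run uses no superblock, and then exposes the array over NBD so the full-device dd write in the trace that follows can seed it with data to verify after the rebuild. A minimal sketch of that size/offset query and the NBD attach; device node and parameters as in the trace, error handling omitted:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')                 # 65536
    data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')     # 0: no superblock
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count="$raid_bdev_size" oflag=direct
    $rpc nbd_stop_disk /dev/nbd0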
00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.376 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:01.634 [2024-07-25 11:34:17.466151] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:01.634 /dev/nbd0 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:01.634 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.634 1+0 records in 00:23:01.635 1+0 records out 00:23:01.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005834 s, 7.0 MB/s 00:23:01.635 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:23:01.893 11:34:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:23:11.885 65536+0 records in 00:23:11.885 65536+0 records out 00:23:11.885 33554432 bytes (34 MB, 32 MiB) copied, 8.60928 s, 3.9 MB/s 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:11.885 11:34:26 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.885 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:11.886 [2024-07-25 11:34:26.407904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:11.886 [2024-07-25 11:34:26.664126] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.886 "name": "raid_bdev1", 00:23:11.886 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:11.886 "strip_size_kb": 0, 00:23:11.886 "state": "online", 00:23:11.886 "raid_level": "raid1", 00:23:11.886 "superblock": false, 
00:23:11.886 "num_base_bdevs": 4, 00:23:11.886 "num_base_bdevs_discovered": 3, 00:23:11.886 "num_base_bdevs_operational": 3, 00:23:11.886 "base_bdevs_list": [ 00:23:11.886 { 00:23:11.886 "name": null, 00:23:11.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.886 "is_configured": false, 00:23:11.886 "data_offset": 0, 00:23:11.886 "data_size": 65536 00:23:11.886 }, 00:23:11.886 { 00:23:11.886 "name": "BaseBdev2", 00:23:11.886 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:11.886 "is_configured": true, 00:23:11.886 "data_offset": 0, 00:23:11.886 "data_size": 65536 00:23:11.886 }, 00:23:11.886 { 00:23:11.886 "name": "BaseBdev3", 00:23:11.886 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:11.886 "is_configured": true, 00:23:11.886 "data_offset": 0, 00:23:11.886 "data_size": 65536 00:23:11.886 }, 00:23:11.886 { 00:23:11.886 "name": "BaseBdev4", 00:23:11.886 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:11.886 "is_configured": true, 00:23:11.886 "data_offset": 0, 00:23:11.886 "data_size": 65536 00:23:11.886 } 00:23:11.886 ] 00:23:11.886 }' 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.886 11:34:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.144 11:34:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:12.403 [2024-07-25 11:34:28.068635] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:12.403 [2024-07-25 11:34:28.086273] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:23:12.403 [2024-07-25 11:34:28.089237] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:12.403 11:34:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:13.333 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.334 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.591 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:13.591 "name": "raid_bdev1", 00:23:13.591 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:13.591 "strip_size_kb": 0, 00:23:13.591 "state": "online", 00:23:13.591 "raid_level": "raid1", 00:23:13.591 "superblock": false, 00:23:13.591 "num_base_bdevs": 4, 00:23:13.591 "num_base_bdevs_discovered": 4, 00:23:13.591 "num_base_bdevs_operational": 4, 00:23:13.591 "process": { 00:23:13.591 "type": "rebuild", 00:23:13.591 "target": "spare", 00:23:13.591 "progress": { 00:23:13.591 "blocks": 24576, 00:23:13.591 "percent": 37 00:23:13.591 } 00:23:13.591 }, 00:23:13.591 "base_bdevs_list": [ 00:23:13.591 { 00:23:13.591 "name": "spare", 00:23:13.591 "uuid": 
"6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:13.591 "is_configured": true, 00:23:13.591 "data_offset": 0, 00:23:13.591 "data_size": 65536 00:23:13.591 }, 00:23:13.591 { 00:23:13.591 "name": "BaseBdev2", 00:23:13.591 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:13.591 "is_configured": true, 00:23:13.591 "data_offset": 0, 00:23:13.591 "data_size": 65536 00:23:13.591 }, 00:23:13.591 { 00:23:13.591 "name": "BaseBdev3", 00:23:13.591 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:13.591 "is_configured": true, 00:23:13.591 "data_offset": 0, 00:23:13.591 "data_size": 65536 00:23:13.591 }, 00:23:13.591 { 00:23:13.591 "name": "BaseBdev4", 00:23:13.591 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:13.591 "is_configured": true, 00:23:13.591 "data_offset": 0, 00:23:13.591 "data_size": 65536 00:23:13.591 } 00:23:13.591 ] 00:23:13.591 }' 00:23:13.591 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:13.591 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:13.591 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:13.848 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:13.848 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:14.106 [2024-07-25 11:34:29.792018] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.106 [2024-07-25 11:34:29.803468] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:14.106 [2024-07-25 11:34:29.803570] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.106 [2024-07-25 11:34:29.803602] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.106 [2024-07-25 11:34:29.803616] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.106 11:34:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.364 11:34:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.364 "name": "raid_bdev1", 00:23:14.364 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:14.364 "strip_size_kb": 0, 00:23:14.364 "state": "online", 00:23:14.364 "raid_level": "raid1", 00:23:14.364 "superblock": false, 00:23:14.364 "num_base_bdevs": 4, 00:23:14.364 "num_base_bdevs_discovered": 3, 00:23:14.364 "num_base_bdevs_operational": 3, 00:23:14.364 "base_bdevs_list": [ 00:23:14.364 { 00:23:14.364 "name": null, 00:23:14.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.364 "is_configured": false, 00:23:14.364 "data_offset": 0, 00:23:14.364 "data_size": 65536 00:23:14.364 }, 00:23:14.364 { 00:23:14.364 "name": "BaseBdev2", 00:23:14.364 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:14.364 "is_configured": true, 00:23:14.364 "data_offset": 0, 00:23:14.364 "data_size": 65536 00:23:14.364 }, 00:23:14.364 { 00:23:14.364 "name": "BaseBdev3", 00:23:14.364 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:14.364 "is_configured": true, 00:23:14.364 "data_offset": 0, 00:23:14.364 "data_size": 65536 00:23:14.364 }, 00:23:14.364 { 00:23:14.364 "name": "BaseBdev4", 00:23:14.364 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:14.364 "is_configured": true, 00:23:14.364 "data_offset": 0, 00:23:14.364 "data_size": 65536 00:23:14.364 } 00:23:14.364 ] 00:23:14.364 }' 00:23:14.364 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.364 11:34:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.930 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:15.188 "name": "raid_bdev1", 00:23:15.188 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:15.188 "strip_size_kb": 0, 00:23:15.188 "state": "online", 00:23:15.188 "raid_level": "raid1", 00:23:15.188 "superblock": false, 00:23:15.188 "num_base_bdevs": 4, 00:23:15.188 "num_base_bdevs_discovered": 3, 00:23:15.188 "num_base_bdevs_operational": 3, 00:23:15.188 "base_bdevs_list": [ 00:23:15.188 { 00:23:15.188 "name": null, 00:23:15.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.188 "is_configured": false, 00:23:15.188 "data_offset": 0, 00:23:15.188 "data_size": 65536 00:23:15.188 }, 00:23:15.188 { 00:23:15.188 "name": "BaseBdev2", 00:23:15.188 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:15.188 "is_configured": true, 00:23:15.188 "data_offset": 0, 00:23:15.188 "data_size": 65536 00:23:15.188 }, 00:23:15.188 { 00:23:15.188 "name": "BaseBdev3", 00:23:15.188 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:15.188 "is_configured": true, 00:23:15.188 "data_offset": 0, 00:23:15.188 "data_size": 65536 00:23:15.188 }, 
00:23:15.188 { 00:23:15.188 "name": "BaseBdev4", 00:23:15.188 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:15.188 "is_configured": true, 00:23:15.188 "data_offset": 0, 00:23:15.188 "data_size": 65536 00:23:15.188 } 00:23:15.188 ] 00:23:15.188 }' 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:15.188 11:34:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:15.446 [2024-07-25 11:34:31.199456] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.446 [2024-07-25 11:34:31.211327] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:23:15.446 [2024-07-25 11:34:31.213838] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:15.446 11:34:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.380 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.637 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:16.637 "name": "raid_bdev1", 00:23:16.637 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:16.637 "strip_size_kb": 0, 00:23:16.637 "state": "online", 00:23:16.637 "raid_level": "raid1", 00:23:16.637 "superblock": false, 00:23:16.637 "num_base_bdevs": 4, 00:23:16.637 "num_base_bdevs_discovered": 4, 00:23:16.637 "num_base_bdevs_operational": 4, 00:23:16.637 "process": { 00:23:16.637 "type": "rebuild", 00:23:16.637 "target": "spare", 00:23:16.637 "progress": { 00:23:16.637 "blocks": 24576, 00:23:16.637 "percent": 37 00:23:16.637 } 00:23:16.637 }, 00:23:16.637 "base_bdevs_list": [ 00:23:16.637 { 00:23:16.637 "name": "spare", 00:23:16.637 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:16.637 "is_configured": true, 00:23:16.637 "data_offset": 0, 00:23:16.637 "data_size": 65536 00:23:16.637 }, 00:23:16.637 { 00:23:16.637 "name": "BaseBdev2", 00:23:16.637 "uuid": "874a64a2-3103-5dbb-b6f4-4a2260f73341", 00:23:16.637 "is_configured": true, 00:23:16.637 "data_offset": 0, 00:23:16.637 "data_size": 65536 00:23:16.637 }, 00:23:16.637 { 00:23:16.637 "name": "BaseBdev3", 00:23:16.637 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:16.637 "is_configured": true, 00:23:16.637 "data_offset": 0, 00:23:16.637 "data_size": 65536 
00:23:16.637 }, 00:23:16.637 { 00:23:16.637 "name": "BaseBdev4", 00:23:16.637 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:16.637 "is_configured": true, 00:23:16.637 "data_offset": 0, 00:23:16.637 "data_size": 65536 00:23:16.637 } 00:23:16.637 ] 00:23:16.637 }' 00:23:16.637 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:23:16.894 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:17.152 [2024-07-25 11:34:32.804097] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:17.152 [2024-07-25 11:34:32.824586] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.152 11:34:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.411 "name": "raid_bdev1", 00:23:17.411 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:17.411 "strip_size_kb": 0, 00:23:17.411 "state": "online", 00:23:17.411 "raid_level": "raid1", 00:23:17.411 "superblock": false, 00:23:17.411 "num_base_bdevs": 4, 00:23:17.411 "num_base_bdevs_discovered": 3, 00:23:17.411 "num_base_bdevs_operational": 3, 00:23:17.411 "process": { 00:23:17.411 "type": "rebuild", 00:23:17.411 "target": "spare", 00:23:17.411 "progress": { 00:23:17.411 "blocks": 36864, 00:23:17.411 "percent": 56 00:23:17.411 } 00:23:17.411 }, 00:23:17.411 "base_bdevs_list": [ 00:23:17.411 { 00:23:17.411 "name": "spare", 00:23:17.411 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:17.411 "is_configured": true, 00:23:17.411 "data_offset": 0, 00:23:17.411 "data_size": 65536 00:23:17.411 }, 00:23:17.411 { 
00:23:17.411 "name": null, 00:23:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.411 "is_configured": false, 00:23:17.411 "data_offset": 0, 00:23:17.411 "data_size": 65536 00:23:17.411 }, 00:23:17.411 { 00:23:17.411 "name": "BaseBdev3", 00:23:17.411 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:17.411 "is_configured": true, 00:23:17.411 "data_offset": 0, 00:23:17.411 "data_size": 65536 00:23:17.411 }, 00:23:17.411 { 00:23:17.411 "name": "BaseBdev4", 00:23:17.411 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:17.411 "is_configured": true, 00:23:17.411 "data_offset": 0, 00:23:17.411 "data_size": 65536 00:23:17.411 } 00:23:17.411 ] 00:23:17.411 }' 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1037 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.411 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.669 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:17.669 "name": "raid_bdev1", 00:23:17.669 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:17.669 "strip_size_kb": 0, 00:23:17.669 "state": "online", 00:23:17.669 "raid_level": "raid1", 00:23:17.669 "superblock": false, 00:23:17.669 "num_base_bdevs": 4, 00:23:17.669 "num_base_bdevs_discovered": 3, 00:23:17.669 "num_base_bdevs_operational": 3, 00:23:17.669 "process": { 00:23:17.669 "type": "rebuild", 00:23:17.669 "target": "spare", 00:23:17.669 "progress": { 00:23:17.669 "blocks": 45056, 00:23:17.669 "percent": 68 00:23:17.669 } 00:23:17.669 }, 00:23:17.669 "base_bdevs_list": [ 00:23:17.669 { 00:23:17.669 "name": "spare", 00:23:17.669 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:17.669 "is_configured": true, 00:23:17.669 "data_offset": 0, 00:23:17.669 "data_size": 65536 00:23:17.669 }, 00:23:17.669 { 00:23:17.669 "name": null, 00:23:17.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.670 "is_configured": false, 00:23:17.670 "data_offset": 0, 00:23:17.670 "data_size": 65536 00:23:17.670 }, 00:23:17.670 { 00:23:17.670 "name": "BaseBdev3", 00:23:17.670 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:17.670 "is_configured": true, 00:23:17.670 "data_offset": 0, 00:23:17.670 "data_size": 65536 00:23:17.670 }, 00:23:17.670 { 
00:23:17.670 "name": "BaseBdev4", 00:23:17.670 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:17.670 "is_configured": true, 00:23:17.670 "data_offset": 0, 00:23:17.670 "data_size": 65536 00:23:17.670 } 00:23:17.670 ] 00:23:17.670 }' 00:23:17.670 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:17.670 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.670 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:17.926 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.926 11:34:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:18.878 [2024-07-25 11:34:34.435010] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:18.878 [2024-07-25 11:34:34.435118] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:18.878 [2024-07-25 11:34:34.435217] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.878 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.136 "name": "raid_bdev1", 00:23:19.136 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:19.136 "strip_size_kb": 0, 00:23:19.136 "state": "online", 00:23:19.136 "raid_level": "raid1", 00:23:19.136 "superblock": false, 00:23:19.136 "num_base_bdevs": 4, 00:23:19.136 "num_base_bdevs_discovered": 3, 00:23:19.136 "num_base_bdevs_operational": 3, 00:23:19.136 "base_bdevs_list": [ 00:23:19.136 { 00:23:19.136 "name": "spare", 00:23:19.136 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:19.136 "is_configured": true, 00:23:19.136 "data_offset": 0, 00:23:19.136 "data_size": 65536 00:23:19.136 }, 00:23:19.136 { 00:23:19.136 "name": null, 00:23:19.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.136 "is_configured": false, 00:23:19.136 "data_offset": 0, 00:23:19.136 "data_size": 65536 00:23:19.136 }, 00:23:19.136 { 00:23:19.136 "name": "BaseBdev3", 00:23:19.136 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:19.136 "is_configured": true, 00:23:19.136 "data_offset": 0, 00:23:19.136 "data_size": 65536 00:23:19.136 }, 00:23:19.136 { 00:23:19.136 "name": "BaseBdev4", 00:23:19.136 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:19.136 "is_configured": true, 00:23:19.136 "data_offset": 0, 00:23:19.136 "data_size": 65536 00:23:19.136 } 00:23:19.136 ] 00:23:19.136 }' 00:23:19.136 11:34:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.136 11:34:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.394 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.394 "name": "raid_bdev1", 00:23:19.394 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:19.394 "strip_size_kb": 0, 00:23:19.394 "state": "online", 00:23:19.394 "raid_level": "raid1", 00:23:19.394 "superblock": false, 00:23:19.394 "num_base_bdevs": 4, 00:23:19.394 "num_base_bdevs_discovered": 3, 00:23:19.394 "num_base_bdevs_operational": 3, 00:23:19.394 "base_bdevs_list": [ 00:23:19.394 { 00:23:19.394 "name": "spare", 00:23:19.394 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:19.394 "is_configured": true, 00:23:19.394 "data_offset": 0, 00:23:19.394 "data_size": 65536 00:23:19.394 }, 00:23:19.394 { 00:23:19.394 "name": null, 00:23:19.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.394 "is_configured": false, 00:23:19.394 "data_offset": 0, 00:23:19.394 "data_size": 65536 00:23:19.394 }, 00:23:19.394 { 00:23:19.394 "name": "BaseBdev3", 00:23:19.395 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:19.395 "is_configured": true, 00:23:19.395 "data_offset": 0, 00:23:19.395 "data_size": 65536 00:23:19.395 }, 00:23:19.395 { 00:23:19.395 "name": "BaseBdev4", 00:23:19.395 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:19.395 "is_configured": true, 00:23:19.395 "data_offset": 0, 00:23:19.395 "data_size": 65536 00:23:19.395 } 00:23:19.395 ] 00:23:19.395 }' 00:23:19.395 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:19.395 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:19.395 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.652 
11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.652 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.653 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.911 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.911 "name": "raid_bdev1", 00:23:19.911 "uuid": "648728ea-cf3c-4602-b1e6-8ee2296ee0e3", 00:23:19.911 "strip_size_kb": 0, 00:23:19.911 "state": "online", 00:23:19.911 "raid_level": "raid1", 00:23:19.911 "superblock": false, 00:23:19.911 "num_base_bdevs": 4, 00:23:19.911 "num_base_bdevs_discovered": 3, 00:23:19.911 "num_base_bdevs_operational": 3, 00:23:19.911 "base_bdevs_list": [ 00:23:19.911 { 00:23:19.911 "name": "spare", 00:23:19.911 "uuid": "6e1ae0c3-8203-521f-80e2-661c15f66452", 00:23:19.911 "is_configured": true, 00:23:19.911 "data_offset": 0, 00:23:19.911 "data_size": 65536 00:23:19.911 }, 00:23:19.911 { 00:23:19.911 "name": null, 00:23:19.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.911 "is_configured": false, 00:23:19.911 "data_offset": 0, 00:23:19.911 "data_size": 65536 00:23:19.911 }, 00:23:19.911 { 00:23:19.911 "name": "BaseBdev3", 00:23:19.911 "uuid": "7dba3bea-c082-5229-a333-dc39c83cedc6", 00:23:19.911 "is_configured": true, 00:23:19.911 "data_offset": 0, 00:23:19.911 "data_size": 65536 00:23:19.911 }, 00:23:19.911 { 00:23:19.911 "name": "BaseBdev4", 00:23:19.911 "uuid": "9be3a8cd-3b22-5131-984a-722085c88d33", 00:23:19.911 "is_configured": true, 00:23:19.911 "data_offset": 0, 00:23:19.911 "data_size": 65536 00:23:19.911 } 00:23:19.911 ] 00:23:19.911 }' 00:23:19.911 11:34:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.911 11:34:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.479 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:20.737 [2024-07-25 11:34:36.462118] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.737 [2024-07-25 11:34:36.462153] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.737 [2024-07-25 11:34:36.462278] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.737 [2024-07-25 11:34:36.462378] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.737 [2024-07-25 11:34:36.462398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:20.737 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.737 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:20.995 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:21.254 /dev/nbd0 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.254 1+0 records in 00:23:21.254 1+0 records out 00:23:21.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256894 s, 15.9 MB/s 00:23:21.254 11:34:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.254 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:21.513 /dev/nbd1 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:21.513 1+0 records in 00:23:21.513 1+0 records out 00:23:21.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003201 s, 12.8 MB/s 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:21.513 11:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:21.771 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:22.030 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 88519 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88519 ']' 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88519 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88519 00:23:22.288 killing process with pid 88519 00:23:22.288 Received shutdown signal, test time was about 60.000000 seconds 00:23:22.288 00:23:22.288 Latency(us) 00:23:22.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.288 =================================================================================================================== 00:23:22.288 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88519' 00:23:22.288 11:34:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88519 00:23:22.288 [2024-07-25 11:34:37.977718] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.288 11:34:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88519 00:23:22.547 [2024-07-25 11:34:38.404750] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.923 ************************************ 00:23:23.923 END TEST raid_rebuild_test 00:23:23.923 ************************************ 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:23:23.923 00:23:23.923 real 0m28.118s 00:23:23.923 user 0m37.852s 00:23:23.923 sys 0m4.590s 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.923 11:34:39 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:23:23.923 11:34:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:23.923 11:34:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.923 11:34:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.923 ************************************ 00:23:23.923 START TEST raid_rebuild_test_sb 00:23:23.923 ************************************ 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:23.923 
11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=89067 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 89067 /var/tmp/spdk-raid.sock 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 89067 ']' 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:23.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.923 11:34:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.923 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:23.923 Zero copy mechanism will not be used. 00:23:23.923 [2024-07-25 11:34:39.701039] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:23:23.923 [2024-07-25 11:34:39.701238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89067 ] 00:23:24.181 [2024-07-25 11:34:39.877113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.439 [2024-07-25 11:34:40.100579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.439 [2024-07-25 11:34:40.297367] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.439 [2024-07-25 11:34:40.297449] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.006 11:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.006 11:34:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:25.006 11:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:25.006 11:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:25.265 BaseBdev1_malloc 00:23:25.265 11:34:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:25.524 [2024-07-25 11:34:41.149907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:25.524 [2024-07-25 11:34:41.149995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.524 [2024-07-25 11:34:41.150038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:25.524 [2024-07-25 11:34:41.150055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.524 [2024-07-25 11:34:41.152852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.524 [2024-07-25 11:34:41.152899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.524 BaseBdev1 00:23:25.524 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:25.524 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:25.784 BaseBdev2_malloc 00:23:25.784 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:26.047 [2024-07-25 11:34:41.712569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:26.047 [2024-07-25 11:34:41.712688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.047 [2024-07-25 11:34:41.712731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:26.047 [2024-07-25 11:34:41.712747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.047 [2024-07-25 11:34:41.715586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.047 [2024-07-25 11:34:41.715651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:23:26.047 BaseBdev2 00:23:26.047 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:26.047 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:26.313 BaseBdev3_malloc 00:23:26.313 11:34:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:26.581 [2024-07-25 11:34:42.253177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:26.581 [2024-07-25 11:34:42.253261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.581 [2024-07-25 11:34:42.253303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:26.581 [2024-07-25 11:34:42.253320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.581 [2024-07-25 11:34:42.256129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.581 [2024-07-25 11:34:42.256174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:26.581 BaseBdev3 00:23:26.581 11:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:26.581 11:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:26.851 BaseBdev4_malloc 00:23:26.851 11:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:27.124 [2024-07-25 11:34:42.753779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:27.124 [2024-07-25 11:34:42.753859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.124 [2024-07-25 11:34:42.753905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:27.124 [2024-07-25 11:34:42.753923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.124 [2024-07-25 11:34:42.756719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.124 [2024-07-25 11:34:42.756772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:27.124 BaseBdev4 00:23:27.124 11:34:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:27.386 spare_malloc 00:23:27.386 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:27.386 spare_delay 00:23:27.386 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:27.646 [2024-07-25 11:34:43.470646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:27.646 [2024-07-25 11:34:43.470738] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:27.647 [2024-07-25 11:34:43.470779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:27.647 [2024-07-25 11:34:43.470795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.647 [2024-07-25 11:34:43.473744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.647 [2024-07-25 11:34:43.473790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:27.647 spare 00:23:27.647 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:27.908 [2024-07-25 11:34:43.702809] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.908 [2024-07-25 11:34:43.705244] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.908 [2024-07-25 11:34:43.705348] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.908 [2024-07-25 11:34:43.705437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:27.908 [2024-07-25 11:34:43.705745] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:27.908 [2024-07-25 11:34:43.705765] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:27.908 [2024-07-25 11:34:43.706166] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:27.908 [2024-07-25 11:34:43.706399] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:27.908 [2024-07-25 11:34:43.706421] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:27.908 [2024-07-25 11:34:43.706700] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.908 11:34:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.167 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:28.167 "name": "raid_bdev1", 00:23:28.167 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:28.167 "strip_size_kb": 0, 00:23:28.167 "state": "online", 00:23:28.167 "raid_level": "raid1", 00:23:28.167 "superblock": true, 00:23:28.167 "num_base_bdevs": 4, 00:23:28.167 "num_base_bdevs_discovered": 4, 00:23:28.167 "num_base_bdevs_operational": 4, 00:23:28.167 "base_bdevs_list": [ 00:23:28.167 { 00:23:28.167 "name": "BaseBdev1", 00:23:28.167 "uuid": "3573077e-ce5d-5b7e-b9c4-934217c03e00", 00:23:28.167 "is_configured": true, 00:23:28.167 "data_offset": 2048, 00:23:28.167 "data_size": 63488 00:23:28.167 }, 00:23:28.167 { 00:23:28.167 "name": "BaseBdev2", 00:23:28.167 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:28.167 "is_configured": true, 00:23:28.167 "data_offset": 2048, 00:23:28.167 "data_size": 63488 00:23:28.167 }, 00:23:28.167 { 00:23:28.167 "name": "BaseBdev3", 00:23:28.167 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:28.167 "is_configured": true, 00:23:28.167 "data_offset": 2048, 00:23:28.167 "data_size": 63488 00:23:28.167 }, 00:23:28.167 { 00:23:28.167 "name": "BaseBdev4", 00:23:28.167 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:28.167 "is_configured": true, 00:23:28.167 "data_offset": 2048, 00:23:28.167 "data_size": 63488 00:23:28.167 } 00:23:28.167 ] 00:23:28.167 }' 00:23:28.167 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.167 11:34:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:29.103 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:29.103 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.103 [2024-07-25 11:34:44.883566] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.103 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:23:29.103 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.103 11:34:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
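Up to this point the trace has built the array that the rest of the run exercises: four malloc bdevs wrapped in passthru bdevs become BaseBdev1..4, a fifth malloc gets a delay bdev layered underneath its passthru so the eventual rebuild onto "spare" is slow enough to observe, and bdev_raid_create -s assembles them into raid_bdev1 with an on-disk superblock. Condensed into the same RPC calls, reusing the $SPDK/$SOCK shorthands from the earlier sketch (sizes and flags copied from the trace):

    RPC="$SPDK/scripts/rpc.py -s $SOCK"

    # Four base bdevs: 32 MiB malloc with 512-byte blocks, each wrapped in a passthru bdev.
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $RPC bdev_malloc_create 32 512 -b "${b}_malloc"
        $RPC bdev_passthru_create -b "${b}_malloc" -p "$b"
    done

    # The rebuild target gets a delay bdev in its stack so rebuild progress stays visible.
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare

    # -s writes a superblock; the trace later relies on it to re-assemble the array.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

    # State check mirroring verify_raid_bdev_state: online, raid1, 4/4 base bdevs.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The JSON printed just below is the output of that final query: strip_size_kb 0 (raid1), superblock true, and all four base bdevs configured with data_offset 2048 out of 63488 blocks.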
00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:29.362 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:29.621 [2024-07-25 11:34:45.423275] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:29.621 /dev/nbd0 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:29.621 1+0 records in 00:23:29.621 1+0 records out 00:23:29.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035732 s, 11.5 MB/s 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:23:29.621 11:34:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:39.627 63488+0 records in 00:23:39.627 63488+0 records out 00:23:39.627 32505856 bytes (33 MB, 31 MiB) copied, 8.17729 s, 4.0 MB/s 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:39.627 
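The dd transfer above is the data-fill step: raid_bdev1 is exported as an NBD block device, all 63488 of its 512-byte blocks are overwritten with random data, and the NBD device is then detached again before the degrade/rebuild steps that follow. The same step in isolation, with the device node /dev/nbd0 as used in the trace (another machine may need a different free nbd node):

    # Export the raid bdev as a block device, fill it end to end, then detach it.
    $RPC nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
    $RPC nbd_stop_disk /dev/nbd0

This random content appears to be what the later cmp -i 1048576 /dev/nbd0 /dev/nbd1 check compares between BaseBdev1 and the rebuilt spare; the 1048576-byte skip matches the data_offset of 2048 blocks times the 512-byte block size, i.e. the superblock region.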
11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:39.627 [2024-07-25 11:34:53.930749] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.627 11:34:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:39.627 [2024-07-25 11:34:54.210871] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.627 "name": "raid_bdev1", 00:23:39.627 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:39.627 "strip_size_kb": 0, 00:23:39.627 "state": "online", 00:23:39.627 "raid_level": "raid1", 00:23:39.627 "superblock": true, 00:23:39.627 "num_base_bdevs": 4, 00:23:39.627 "num_base_bdevs_discovered": 3, 00:23:39.627 "num_base_bdevs_operational": 3, 00:23:39.627 
"base_bdevs_list": [ 00:23:39.627 { 00:23:39.627 "name": null, 00:23:39.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.627 "is_configured": false, 00:23:39.627 "data_offset": 2048, 00:23:39.627 "data_size": 63488 00:23:39.627 }, 00:23:39.627 { 00:23:39.627 "name": "BaseBdev2", 00:23:39.627 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:39.627 "is_configured": true, 00:23:39.627 "data_offset": 2048, 00:23:39.627 "data_size": 63488 00:23:39.627 }, 00:23:39.627 { 00:23:39.627 "name": "BaseBdev3", 00:23:39.627 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:39.627 "is_configured": true, 00:23:39.627 "data_offset": 2048, 00:23:39.627 "data_size": 63488 00:23:39.627 }, 00:23:39.627 { 00:23:39.627 "name": "BaseBdev4", 00:23:39.627 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:39.627 "is_configured": true, 00:23:39.627 "data_offset": 2048, 00:23:39.627 "data_size": 63488 00:23:39.627 } 00:23:39.627 ] 00:23:39.627 }' 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.627 11:34:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.627 11:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:39.627 [2024-07-25 11:34:55.351182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.627 [2024-07-25 11:34:55.364932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:23:39.627 [2024-07-25 11:34:55.367457] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:39.627 11:34:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.570 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.831 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:40.831 "name": "raid_bdev1", 00:23:40.831 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:40.831 "strip_size_kb": 0, 00:23:40.831 "state": "online", 00:23:40.831 "raid_level": "raid1", 00:23:40.831 "superblock": true, 00:23:40.831 "num_base_bdevs": 4, 00:23:40.831 "num_base_bdevs_discovered": 4, 00:23:40.831 "num_base_bdevs_operational": 4, 00:23:40.831 "process": { 00:23:40.831 "type": "rebuild", 00:23:40.831 "target": "spare", 00:23:40.831 "progress": { 00:23:40.831 "blocks": 24576, 00:23:40.831 "percent": 38 00:23:40.831 } 00:23:40.831 }, 00:23:40.831 "base_bdevs_list": [ 00:23:40.831 { 00:23:40.831 "name": "spare", 00:23:40.831 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:40.831 "is_configured": true, 00:23:40.831 
"data_offset": 2048, 00:23:40.831 "data_size": 63488 00:23:40.831 }, 00:23:40.831 { 00:23:40.831 "name": "BaseBdev2", 00:23:40.831 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:40.831 "is_configured": true, 00:23:40.831 "data_offset": 2048, 00:23:40.831 "data_size": 63488 00:23:40.831 }, 00:23:40.831 { 00:23:40.831 "name": "BaseBdev3", 00:23:40.831 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:40.831 "is_configured": true, 00:23:40.831 "data_offset": 2048, 00:23:40.831 "data_size": 63488 00:23:40.831 }, 00:23:40.831 { 00:23:40.831 "name": "BaseBdev4", 00:23:40.831 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:40.831 "is_configured": true, 00:23:40.831 "data_offset": 2048, 00:23:40.831 "data_size": 63488 00:23:40.831 } 00:23:40.831 ] 00:23:40.831 }' 00:23:40.831 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.089 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.089 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.089 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.089 11:34:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:41.346 [2024-07-25 11:34:56.973496] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.346 [2024-07-25 11:34:56.979449] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.346 [2024-07-25 11:34:56.979531] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.346 [2024-07-25 11:34:56.979562] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.346 [2024-07-25 11:34:56.979574] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.346 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.604 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:41.604 "name": "raid_bdev1", 00:23:41.604 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:41.604 "strip_size_kb": 0, 00:23:41.604 "state": "online", 00:23:41.604 "raid_level": "raid1", 00:23:41.604 "superblock": true, 00:23:41.604 "num_base_bdevs": 4, 00:23:41.604 "num_base_bdevs_discovered": 3, 00:23:41.604 "num_base_bdevs_operational": 3, 00:23:41.604 "base_bdevs_list": [ 00:23:41.604 { 00:23:41.604 "name": null, 00:23:41.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.604 "is_configured": false, 00:23:41.604 "data_offset": 2048, 00:23:41.604 "data_size": 63488 00:23:41.604 }, 00:23:41.604 { 00:23:41.604 "name": "BaseBdev2", 00:23:41.604 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:41.604 "is_configured": true, 00:23:41.604 "data_offset": 2048, 00:23:41.604 "data_size": 63488 00:23:41.604 }, 00:23:41.604 { 00:23:41.604 "name": "BaseBdev3", 00:23:41.604 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:41.604 "is_configured": true, 00:23:41.604 "data_offset": 2048, 00:23:41.604 "data_size": 63488 00:23:41.604 }, 00:23:41.604 { 00:23:41.604 "name": "BaseBdev4", 00:23:41.604 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:41.604 "is_configured": true, 00:23:41.604 "data_offset": 2048, 00:23:41.604 "data_size": 63488 00:23:41.604 } 00:23:41.604 ] 00:23:41.604 }' 00:23:41.604 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.604 11:34:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.170 11:34:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:42.429 "name": "raid_bdev1", 00:23:42.429 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:42.429 "strip_size_kb": 0, 00:23:42.429 "state": "online", 00:23:42.429 "raid_level": "raid1", 00:23:42.429 "superblock": true, 00:23:42.429 "num_base_bdevs": 4, 00:23:42.429 "num_base_bdevs_discovered": 3, 00:23:42.429 "num_base_bdevs_operational": 3, 00:23:42.429 "base_bdevs_list": [ 00:23:42.429 { 00:23:42.429 "name": null, 00:23:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.429 "is_configured": false, 00:23:42.429 "data_offset": 2048, 00:23:42.429 "data_size": 63488 00:23:42.429 }, 00:23:42.429 { 00:23:42.429 "name": "BaseBdev2", 00:23:42.429 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:42.429 "is_configured": true, 00:23:42.429 "data_offset": 2048, 00:23:42.429 "data_size": 63488 00:23:42.429 }, 00:23:42.429 { 00:23:42.429 "name": "BaseBdev3", 00:23:42.429 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:42.429 "is_configured": true, 00:23:42.429 "data_offset": 2048, 00:23:42.429 "data_size": 63488 
00:23:42.429 }, 00:23:42.429 { 00:23:42.429 "name": "BaseBdev4", 00:23:42.429 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:42.429 "is_configured": true, 00:23:42.429 "data_offset": 2048, 00:23:42.429 "data_size": 63488 00:23:42.429 } 00:23:42.429 ] 00:23:42.429 }' 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.429 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.687 [2024-07-25 11:34:58.550642] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.687 [2024-07-25 11:34:58.563667] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:23:42.687 [2024-07-25 11:34:58.566128] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.946 11:34:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.881 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.139 "name": "raid_bdev1", 00:23:44.139 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:44.139 "strip_size_kb": 0, 00:23:44.139 "state": "online", 00:23:44.139 "raid_level": "raid1", 00:23:44.139 "superblock": true, 00:23:44.139 "num_base_bdevs": 4, 00:23:44.139 "num_base_bdevs_discovered": 4, 00:23:44.139 "num_base_bdevs_operational": 4, 00:23:44.139 "process": { 00:23:44.139 "type": "rebuild", 00:23:44.139 "target": "spare", 00:23:44.139 "progress": { 00:23:44.139 "blocks": 24576, 00:23:44.139 "percent": 38 00:23:44.139 } 00:23:44.139 }, 00:23:44.139 "base_bdevs_list": [ 00:23:44.139 { 00:23:44.139 "name": "spare", 00:23:44.139 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:44.139 "is_configured": true, 00:23:44.139 "data_offset": 2048, 00:23:44.139 "data_size": 63488 00:23:44.139 }, 00:23:44.139 { 00:23:44.139 "name": "BaseBdev2", 00:23:44.139 "uuid": "0ca689eb-886c-5809-a578-2c8ea9729f94", 00:23:44.139 "is_configured": true, 00:23:44.139 "data_offset": 2048, 00:23:44.139 "data_size": 63488 00:23:44.139 }, 00:23:44.139 { 00:23:44.139 "name": "BaseBdev3", 00:23:44.139 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:44.139 "is_configured": true, 
00:23:44.139 "data_offset": 2048, 00:23:44.139 "data_size": 63488 00:23:44.139 }, 00:23:44.139 { 00:23:44.139 "name": "BaseBdev4", 00:23:44.139 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:44.139 "is_configured": true, 00:23:44.139 "data_offset": 2048, 00:23:44.139 "data_size": 63488 00:23:44.139 } 00:23:44.139 ] 00:23:44.139 }' 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:23:44.139 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:23:44.139 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:44.140 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:23:44.140 11:34:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:44.398 [2024-07-25 11:35:00.196698] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:44.657 [2024-07-25 11:35:00.377995] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.657 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.914 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.914 "name": "raid_bdev1", 00:23:44.914 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:44.914 "strip_size_kb": 0, 00:23:44.914 "state": "online", 00:23:44.914 "raid_level": "raid1", 00:23:44.914 "superblock": true, 00:23:44.914 "num_base_bdevs": 4, 00:23:44.914 "num_base_bdevs_discovered": 3, 00:23:44.914 "num_base_bdevs_operational": 3, 00:23:44.914 "process": { 00:23:44.914 "type": "rebuild", 00:23:44.914 "target": "spare", 00:23:44.914 "progress": { 00:23:44.914 "blocks": 38912, 
00:23:44.914 "percent": 61 00:23:44.914 } 00:23:44.914 }, 00:23:44.914 "base_bdevs_list": [ 00:23:44.914 { 00:23:44.914 "name": "spare", 00:23:44.915 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:44.915 "is_configured": true, 00:23:44.915 "data_offset": 2048, 00:23:44.915 "data_size": 63488 00:23:44.915 }, 00:23:44.915 { 00:23:44.915 "name": null, 00:23:44.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.915 "is_configured": false, 00:23:44.915 "data_offset": 2048, 00:23:44.915 "data_size": 63488 00:23:44.915 }, 00:23:44.915 { 00:23:44.915 "name": "BaseBdev3", 00:23:44.915 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:44.915 "is_configured": true, 00:23:44.915 "data_offset": 2048, 00:23:44.915 "data_size": 63488 00:23:44.915 }, 00:23:44.915 { 00:23:44.915 "name": "BaseBdev4", 00:23:44.915 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:44.915 "is_configured": true, 00:23:44.915 "data_offset": 2048, 00:23:44.915 "data_size": 63488 00:23:44.915 } 00:23:44.915 ] 00:23:44.915 }' 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1064 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.915 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.173 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:45.173 "name": "raid_bdev1", 00:23:45.173 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:45.173 "strip_size_kb": 0, 00:23:45.173 "state": "online", 00:23:45.173 "raid_level": "raid1", 00:23:45.173 "superblock": true, 00:23:45.173 "num_base_bdevs": 4, 00:23:45.173 "num_base_bdevs_discovered": 3, 00:23:45.173 "num_base_bdevs_operational": 3, 00:23:45.173 "process": { 00:23:45.173 "type": "rebuild", 00:23:45.173 "target": "spare", 00:23:45.173 "progress": { 00:23:45.173 "blocks": 45056, 00:23:45.173 "percent": 70 00:23:45.173 } 00:23:45.173 }, 00:23:45.173 "base_bdevs_list": [ 00:23:45.173 { 00:23:45.173 "name": "spare", 00:23:45.173 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:45.173 "is_configured": true, 00:23:45.173 "data_offset": 2048, 00:23:45.173 "data_size": 63488 00:23:45.173 }, 00:23:45.173 { 00:23:45.173 "name": null, 00:23:45.173 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:45.173 "is_configured": false, 00:23:45.173 "data_offset": 2048, 00:23:45.173 "data_size": 63488 00:23:45.173 }, 00:23:45.173 { 00:23:45.173 "name": "BaseBdev3", 00:23:45.173 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:45.173 "is_configured": true, 00:23:45.173 "data_offset": 2048, 00:23:45.173 "data_size": 63488 00:23:45.173 }, 00:23:45.173 { 00:23:45.173 "name": "BaseBdev4", 00:23:45.173 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:45.173 "is_configured": true, 00:23:45.173 "data_offset": 2048, 00:23:45.173 "data_size": 63488 00:23:45.173 } 00:23:45.173 ] 00:23:45.173 }' 00:23:45.173 11:35:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:45.173 11:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.173 11:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:45.430 11:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.430 11:35:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:45.995 [2024-07-25 11:35:01.787992] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:45.995 [2024-07-25 11:35:01.788103] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:45.995 [2024-07-25 11:35:01.788259] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.252 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.510 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:46.510 "name": "raid_bdev1", 00:23:46.510 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:46.510 "strip_size_kb": 0, 00:23:46.510 "state": "online", 00:23:46.510 "raid_level": "raid1", 00:23:46.510 "superblock": true, 00:23:46.510 "num_base_bdevs": 4, 00:23:46.510 "num_base_bdevs_discovered": 3, 00:23:46.510 "num_base_bdevs_operational": 3, 00:23:46.510 "base_bdevs_list": [ 00:23:46.510 { 00:23:46.510 "name": "spare", 00:23:46.510 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:46.510 "is_configured": true, 00:23:46.510 "data_offset": 2048, 00:23:46.510 "data_size": 63488 00:23:46.510 }, 00:23:46.510 { 00:23:46.510 "name": null, 00:23:46.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.511 "is_configured": false, 00:23:46.511 "data_offset": 2048, 00:23:46.511 "data_size": 63488 00:23:46.511 }, 00:23:46.511 { 00:23:46.511 "name": "BaseBdev3", 00:23:46.511 
"uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:46.511 "is_configured": true, 00:23:46.511 "data_offset": 2048, 00:23:46.511 "data_size": 63488 00:23:46.511 }, 00:23:46.511 { 00:23:46.511 "name": "BaseBdev4", 00:23:46.511 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:46.511 "is_configured": true, 00:23:46.511 "data_offset": 2048, 00:23:46.511 "data_size": 63488 00:23:46.511 } 00:23:46.511 ] 00:23:46.511 }' 00:23:46.511 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.769 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:47.027 "name": "raid_bdev1", 00:23:47.027 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:47.027 "strip_size_kb": 0, 00:23:47.027 "state": "online", 00:23:47.027 "raid_level": "raid1", 00:23:47.027 "superblock": true, 00:23:47.027 "num_base_bdevs": 4, 00:23:47.027 "num_base_bdevs_discovered": 3, 00:23:47.027 "num_base_bdevs_operational": 3, 00:23:47.027 "base_bdevs_list": [ 00:23:47.027 { 00:23:47.027 "name": "spare", 00:23:47.027 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:47.027 "is_configured": true, 00:23:47.027 "data_offset": 2048, 00:23:47.027 "data_size": 63488 00:23:47.027 }, 00:23:47.027 { 00:23:47.027 "name": null, 00:23:47.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.027 "is_configured": false, 00:23:47.027 "data_offset": 2048, 00:23:47.027 "data_size": 63488 00:23:47.027 }, 00:23:47.027 { 00:23:47.027 "name": "BaseBdev3", 00:23:47.027 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:47.027 "is_configured": true, 00:23:47.027 "data_offset": 2048, 00:23:47.027 "data_size": 63488 00:23:47.027 }, 00:23:47.027 { 00:23:47.027 "name": "BaseBdev4", 00:23:47.027 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:47.027 "is_configured": true, 00:23:47.027 "data_offset": 2048, 00:23:47.027 "data_size": 63488 00:23:47.027 } 00:23:47.027 ] 00:23:47.027 }' 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.027 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.028 11:35:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.594 11:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.594 "name": "raid_bdev1", 00:23:47.594 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:47.594 "strip_size_kb": 0, 00:23:47.594 "state": "online", 00:23:47.594 "raid_level": "raid1", 00:23:47.594 "superblock": true, 00:23:47.594 "num_base_bdevs": 4, 00:23:47.594 "num_base_bdevs_discovered": 3, 00:23:47.594 "num_base_bdevs_operational": 3, 00:23:47.594 "base_bdevs_list": [ 00:23:47.594 { 00:23:47.594 "name": "spare", 00:23:47.594 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:47.594 "is_configured": true, 00:23:47.594 "data_offset": 2048, 00:23:47.594 "data_size": 63488 00:23:47.594 }, 00:23:47.594 { 00:23:47.594 "name": null, 00:23:47.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.594 "is_configured": false, 00:23:47.594 "data_offset": 2048, 00:23:47.594 "data_size": 63488 00:23:47.594 }, 00:23:47.594 { 00:23:47.594 "name": "BaseBdev3", 00:23:47.594 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:47.594 "is_configured": true, 00:23:47.594 "data_offset": 2048, 00:23:47.594 "data_size": 63488 00:23:47.594 }, 00:23:47.594 { 00:23:47.594 "name": "BaseBdev4", 00:23:47.594 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:47.594 "is_configured": true, 00:23:47.594 "data_offset": 2048, 00:23:47.594 "data_size": 63488 00:23:47.594 } 00:23:47.594 ] 00:23:47.594 }' 00:23:47.594 11:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.594 11:35:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.160 11:35:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:48.417 [2024-07-25 11:35:04.145224] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:48.417 [2024-07-25 11:35:04.145293] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:23:48.417 [2024-07-25 11:35:04.145402] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.417 [2024-07-25 11:35:04.145502] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.417 [2024-07-25 11:35:04.145523] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:48.417 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.417 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.676 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:48.934 /dev/nbd0 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:48.934 1+0 records in 00:23:48.934 1+0 records out 00:23:48.934 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000349821 s, 11.7 MB/s 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:48.934 11:35:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:49.193 /dev/nbd1 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:49.193 1+0 records in 00:23:49.193 1+0 records out 00:23:49.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483433 s, 8.5 MB/s 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:49.193 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.451 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.709 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:23:49.967 11:35:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:50.225 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:50.482 [2024-07-25 11:35:06.354442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:50.482 [2024-07-25 11:35:06.354542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:50.482 [2024-07-25 11:35:06.354575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:50.482 [2024-07-25 11:35:06.354593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:50.482 [2024-07-25 
11:35:06.357422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:50.482 [2024-07-25 11:35:06.357477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:50.482 [2024-07-25 11:35:06.357603] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:50.482 [2024-07-25 11:35:06.357697] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:50.482 [2024-07-25 11:35:06.357892] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:50.482 [2024-07-25 11:35:06.358065] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:50.482 spare 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.741 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.741 [2024-07-25 11:35:06.458203] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:50.741 [2024-07-25 11:35:06.458285] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:50.741 [2024-07-25 11:35:06.458798] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:23:50.741 [2024-07-25 11:35:06.459077] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:50.741 [2024-07-25 11:35:06.459104] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:50.741 [2024-07-25 11:35:06.459356] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.999 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:50.999 "name": "raid_bdev1", 00:23:50.999 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:50.999 "strip_size_kb": 0, 00:23:50.999 "state": "online", 00:23:50.999 "raid_level": "raid1", 00:23:50.999 "superblock": true, 00:23:50.999 "num_base_bdevs": 4, 00:23:50.999 "num_base_bdevs_discovered": 3, 00:23:50.999 "num_base_bdevs_operational": 3, 00:23:50.999 "base_bdevs_list": [ 00:23:50.999 { 00:23:50.999 "name": "spare", 00:23:50.999 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:50.999 "is_configured": true, 
00:23:50.999 "data_offset": 2048, 00:23:50.999 "data_size": 63488 00:23:50.999 }, 00:23:50.999 { 00:23:50.999 "name": null, 00:23:50.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.999 "is_configured": false, 00:23:50.999 "data_offset": 2048, 00:23:50.999 "data_size": 63488 00:23:50.999 }, 00:23:50.999 { 00:23:50.999 "name": "BaseBdev3", 00:23:50.999 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:50.999 "is_configured": true, 00:23:50.999 "data_offset": 2048, 00:23:50.999 "data_size": 63488 00:23:50.999 }, 00:23:50.999 { 00:23:50.999 "name": "BaseBdev4", 00:23:50.999 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:50.999 "is_configured": true, 00:23:50.999 "data_offset": 2048, 00:23:50.999 "data_size": 63488 00:23:50.999 } 00:23:50.999 ] 00:23:50.999 }' 00:23:50.999 11:35:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:50.999 11:35:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.930 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.187 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:52.187 "name": "raid_bdev1", 00:23:52.187 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:52.187 "strip_size_kb": 0, 00:23:52.187 "state": "online", 00:23:52.187 "raid_level": "raid1", 00:23:52.187 "superblock": true, 00:23:52.187 "num_base_bdevs": 4, 00:23:52.187 "num_base_bdevs_discovered": 3, 00:23:52.187 "num_base_bdevs_operational": 3, 00:23:52.187 "base_bdevs_list": [ 00:23:52.187 { 00:23:52.187 "name": "spare", 00:23:52.187 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:52.188 "is_configured": true, 00:23:52.188 "data_offset": 2048, 00:23:52.188 "data_size": 63488 00:23:52.188 }, 00:23:52.188 { 00:23:52.188 "name": null, 00:23:52.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.188 "is_configured": false, 00:23:52.188 "data_offset": 2048, 00:23:52.188 "data_size": 63488 00:23:52.188 }, 00:23:52.188 { 00:23:52.188 "name": "BaseBdev3", 00:23:52.188 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:52.188 "is_configured": true, 00:23:52.188 "data_offset": 2048, 00:23:52.188 "data_size": 63488 00:23:52.188 }, 00:23:52.188 { 00:23:52.188 "name": "BaseBdev4", 00:23:52.188 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:52.188 "is_configured": true, 00:23:52.188 "data_offset": 2048, 00:23:52.188 "data_size": 63488 00:23:52.188 } 00:23:52.188 ] 00:23:52.188 }' 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.188 11:35:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:52.445 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.445 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:52.703 [2024-07-25 11:35:08.524034] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.703 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.960 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.960 "name": "raid_bdev1", 00:23:52.960 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:52.960 "strip_size_kb": 0, 00:23:52.960 "state": "online", 00:23:52.960 "raid_level": "raid1", 00:23:52.960 "superblock": true, 00:23:52.960 "num_base_bdevs": 4, 00:23:52.960 "num_base_bdevs_discovered": 2, 00:23:52.960 "num_base_bdevs_operational": 2, 00:23:52.960 "base_bdevs_list": [ 00:23:52.960 { 00:23:52.960 "name": null, 00:23:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.960 "is_configured": false, 00:23:52.960 "data_offset": 2048, 00:23:52.960 "data_size": 63488 00:23:52.960 }, 00:23:52.960 { 00:23:52.960 "name": null, 00:23:52.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.960 "is_configured": false, 00:23:52.960 "data_offset": 2048, 00:23:52.960 "data_size": 63488 00:23:52.960 }, 00:23:52.960 { 00:23:52.960 "name": "BaseBdev3", 00:23:52.960 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:52.960 "is_configured": true, 00:23:52.960 "data_offset": 2048, 00:23:52.960 "data_size": 63488 00:23:52.960 }, 00:23:52.960 { 00:23:52.960 "name": "BaseBdev4", 00:23:52.960 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:52.960 
"is_configured": true, 00:23:52.960 "data_offset": 2048, 00:23:52.960 "data_size": 63488 00:23:52.960 } 00:23:52.960 ] 00:23:52.960 }' 00:23:52.960 11:35:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.960 11:35:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:53.596 11:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:54.161 [2024-07-25 11:35:09.744458] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:54.161 [2024-07-25 11:35:09.744823] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:23:54.161 [2024-07-25 11:35:09.744883] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:54.161 [2024-07-25 11:35:09.744950] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:54.161 [2024-07-25 11:35:09.759316] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:23:54.161 [2024-07-25 11:35:09.761751] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:54.161 11:35:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.094 11:35:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:55.351 "name": "raid_bdev1", 00:23:55.351 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:55.351 "strip_size_kb": 0, 00:23:55.351 "state": "online", 00:23:55.351 "raid_level": "raid1", 00:23:55.351 "superblock": true, 00:23:55.351 "num_base_bdevs": 4, 00:23:55.351 "num_base_bdevs_discovered": 3, 00:23:55.351 "num_base_bdevs_operational": 3, 00:23:55.351 "process": { 00:23:55.351 "type": "rebuild", 00:23:55.351 "target": "spare", 00:23:55.351 "progress": { 00:23:55.351 "blocks": 24576, 00:23:55.351 "percent": 38 00:23:55.351 } 00:23:55.351 }, 00:23:55.351 "base_bdevs_list": [ 00:23:55.351 { 00:23:55.351 "name": "spare", 00:23:55.351 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:55.351 "is_configured": true, 00:23:55.351 "data_offset": 2048, 00:23:55.351 "data_size": 63488 00:23:55.351 }, 00:23:55.351 { 00:23:55.351 "name": null, 00:23:55.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.351 "is_configured": false, 00:23:55.351 "data_offset": 2048, 00:23:55.351 "data_size": 63488 00:23:55.351 }, 00:23:55.351 { 00:23:55.351 "name": "BaseBdev3", 00:23:55.351 "uuid": 
"04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:55.351 "is_configured": true, 00:23:55.351 "data_offset": 2048, 00:23:55.351 "data_size": 63488 00:23:55.351 }, 00:23:55.351 { 00:23:55.351 "name": "BaseBdev4", 00:23:55.351 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:55.351 "is_configured": true, 00:23:55.351 "data_offset": 2048, 00:23:55.351 "data_size": 63488 00:23:55.351 } 00:23:55.351 ] 00:23:55.351 }' 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.351 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:55.609 [2024-07-25 11:35:11.423873] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:55.609 [2024-07-25 11:35:11.474386] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:55.609 [2024-07-25 11:35:11.474463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.609 [2024-07-25 11:35:11.474487] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:55.609 [2024-07-25 11:35:11.474501] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.866 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.123 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.123 "name": "raid_bdev1", 00:23:56.123 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:56.123 "strip_size_kb": 0, 00:23:56.123 "state": "online", 00:23:56.123 "raid_level": "raid1", 00:23:56.123 "superblock": true, 00:23:56.123 "num_base_bdevs": 4, 00:23:56.123 "num_base_bdevs_discovered": 2, 00:23:56.123 "num_base_bdevs_operational": 2, 
00:23:56.123 "base_bdevs_list": [ 00:23:56.123 { 00:23:56.123 "name": null, 00:23:56.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.123 "is_configured": false, 00:23:56.123 "data_offset": 2048, 00:23:56.123 "data_size": 63488 00:23:56.123 }, 00:23:56.123 { 00:23:56.123 "name": null, 00:23:56.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.123 "is_configured": false, 00:23:56.123 "data_offset": 2048, 00:23:56.123 "data_size": 63488 00:23:56.123 }, 00:23:56.123 { 00:23:56.123 "name": "BaseBdev3", 00:23:56.123 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:56.123 "is_configured": true, 00:23:56.123 "data_offset": 2048, 00:23:56.123 "data_size": 63488 00:23:56.123 }, 00:23:56.123 { 00:23:56.123 "name": "BaseBdev4", 00:23:56.123 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:56.123 "is_configured": true, 00:23:56.123 "data_offset": 2048, 00:23:56.123 "data_size": 63488 00:23:56.123 } 00:23:56.123 ] 00:23:56.123 }' 00:23:56.123 11:35:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.123 11:35:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.688 11:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:56.946 [2024-07-25 11:35:12.649563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.946 [2024-07-25 11:35:12.649677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.946 [2024-07-25 11:35:12.649731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:56.946 [2024-07-25 11:35:12.649750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.946 [2024-07-25 11:35:12.650401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.946 [2024-07-25 11:35:12.650443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.946 [2024-07-25 11:35:12.650561] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:56.946 [2024-07-25 11:35:12.650592] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:23:56.946 [2024-07-25 11:35:12.650606] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:56.946 [2024-07-25 11:35:12.650665] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:56.946 [2024-07-25 11:35:12.663283] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:23:56.946 spare 00:23:56.946 [2024-07-25 11:35:12.665797] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:56.946 11:35:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.880 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.138 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.138 "name": "raid_bdev1", 00:23:58.138 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:58.138 "strip_size_kb": 0, 00:23:58.138 "state": "online", 00:23:58.138 "raid_level": "raid1", 00:23:58.138 "superblock": true, 00:23:58.138 "num_base_bdevs": 4, 00:23:58.138 "num_base_bdevs_discovered": 3, 00:23:58.138 "num_base_bdevs_operational": 3, 00:23:58.138 "process": { 00:23:58.138 "type": "rebuild", 00:23:58.138 "target": "spare", 00:23:58.138 "progress": { 00:23:58.138 "blocks": 24576, 00:23:58.138 "percent": 38 00:23:58.138 } 00:23:58.138 }, 00:23:58.138 "base_bdevs_list": [ 00:23:58.138 { 00:23:58.138 "name": "spare", 00:23:58.138 "uuid": "396fc991-cb95-5cd2-bc29-1edef629c6b1", 00:23:58.138 "is_configured": true, 00:23:58.138 "data_offset": 2048, 00:23:58.138 "data_size": 63488 00:23:58.138 }, 00:23:58.138 { 00:23:58.138 "name": null, 00:23:58.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.138 "is_configured": false, 00:23:58.138 "data_offset": 2048, 00:23:58.138 "data_size": 63488 00:23:58.138 }, 00:23:58.138 { 00:23:58.138 "name": "BaseBdev3", 00:23:58.138 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:58.138 "is_configured": true, 00:23:58.138 "data_offset": 2048, 00:23:58.138 "data_size": 63488 00:23:58.138 }, 00:23:58.138 { 00:23:58.138 "name": "BaseBdev4", 00:23:58.138 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:58.138 "is_configured": true, 00:23:58.138 "data_offset": 2048, 00:23:58.138 "data_size": 63488 00:23:58.138 } 00:23:58.138 ] 00:23:58.138 }' 00:23:58.138 11:35:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:58.138 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:58.138 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:58.396 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:58.396 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:58.654 [2024-07-25 11:35:14.319931] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.654 [2024-07-25 11:35:14.378530] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:58.654 [2024-07-25 11:35:14.378650] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.654 [2024-07-25 11:35:14.378693] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:58.654 [2024-07-25 11:35:14.378704] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:58.654 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.654 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.654 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.654 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.654 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.655 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.914 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.914 "name": "raid_bdev1", 00:23:58.914 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:58.914 "strip_size_kb": 0, 00:23:58.914 "state": "online", 00:23:58.914 "raid_level": "raid1", 00:23:58.914 "superblock": true, 00:23:58.914 "num_base_bdevs": 4, 00:23:58.914 "num_base_bdevs_discovered": 2, 00:23:58.914 "num_base_bdevs_operational": 2, 00:23:58.914 "base_bdevs_list": [ 00:23:58.914 { 00:23:58.914 "name": null, 00:23:58.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.914 "is_configured": false, 00:23:58.914 "data_offset": 2048, 00:23:58.914 "data_size": 63488 00:23:58.914 }, 00:23:58.914 { 00:23:58.914 "name": null, 00:23:58.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.914 "is_configured": false, 00:23:58.914 "data_offset": 2048, 00:23:58.914 "data_size": 63488 00:23:58.914 }, 00:23:58.914 { 00:23:58.914 "name": "BaseBdev3", 00:23:58.914 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:58.914 "is_configured": true, 00:23:58.914 "data_offset": 2048, 00:23:58.914 "data_size": 63488 00:23:58.914 }, 00:23:58.914 { 00:23:58.914 "name": "BaseBdev4", 00:23:58.914 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:58.914 "is_configured": true, 00:23:58.914 "data_offset": 2048, 00:23:58.914 "data_size": 63488 00:23:58.914 } 00:23:58.914 ] 00:23:58.914 }' 
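The verify_raid_bdev_state checks above reduce to pulling raid_bdev1's entry out of bdev_raid_get_bdevs and comparing a few fields; a rough sketch of that pattern, reusing the socket and jq select filter from the trace (the variable name and condensed field checks are illustrative, not the exact helper from bdev_raid.sh):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Grab only the raid_bdev1 entry from the full bdev list.
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # After removing the spare the array should stay online as a degraded raid1
    # with two of the four base bdevs still discovered and operational.
    [ "$(jq -r .state <<< "$info")" = online ]
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 2 ]
    [ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 2 ]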
00:23:58.914 11:35:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.914 11:35:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.480 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.738 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.738 "name": "raid_bdev1", 00:23:59.738 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:23:59.738 "strip_size_kb": 0, 00:23:59.738 "state": "online", 00:23:59.738 "raid_level": "raid1", 00:23:59.738 "superblock": true, 00:23:59.738 "num_base_bdevs": 4, 00:23:59.738 "num_base_bdevs_discovered": 2, 00:23:59.738 "num_base_bdevs_operational": 2, 00:23:59.738 "base_bdevs_list": [ 00:23:59.738 { 00:23:59.738 "name": null, 00:23:59.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.738 "is_configured": false, 00:23:59.738 "data_offset": 2048, 00:23:59.738 "data_size": 63488 00:23:59.738 }, 00:23:59.738 { 00:23:59.738 "name": null, 00:23:59.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.738 "is_configured": false, 00:23:59.738 "data_offset": 2048, 00:23:59.738 "data_size": 63488 00:23:59.738 }, 00:23:59.738 { 00:23:59.738 "name": "BaseBdev3", 00:23:59.738 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:23:59.738 "is_configured": true, 00:23:59.738 "data_offset": 2048, 00:23:59.738 "data_size": 63488 00:23:59.738 }, 00:23:59.738 { 00:23:59.738 "name": "BaseBdev4", 00:23:59.738 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:23:59.738 "is_configured": true, 00:23:59.738 "data_offset": 2048, 00:23:59.738 "data_size": 63488 00:23:59.738 } 00:23:59.738 ] 00:23:59.738 }' 00:23:59.738 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.738 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:59.738 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.995 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:59.995 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:00.252 11:35:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:00.510 [2024-07-25 11:35:16.177710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:00.510 [2024-07-25 11:35:16.177792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
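A hedged sketch of the stale-superblock case this part of the test exercises, using the bdev names and rpc.py calls from the trace; the failure check is an illustrative stand-in for the NOT wrapper used further below:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Recreate BaseBdev1 on its original malloc bdev. Its on-disk superblock has
    # seq_number 1, older than raid_bdev1's 6, so the examine path logs the
    # mismatch and does not re-add the bdev to the array.
    $RPC bdev_passthru_delete BaseBdev1
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # Asking for the add explicitly is expected to fail with JSON-RPC error -22
    # ("Failed to add base bdev to RAID bdev: Invalid argument").
    if $RPC bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "unexpected success adding stale BaseBdev1" >&2
        exit 1
    fi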
00:24:00.510 [2024-07-25 11:35:16.177828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:24:00.510 [2024-07-25 11:35:16.177843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.510 [2024-07-25 11:35:16.178395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.510 [2024-07-25 11:35:16.178430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:00.510 [2024-07-25 11:35:16.178536] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:00.510 [2024-07-25 11:35:16.178558] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:00.510 [2024-07-25 11:35:16.178581] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:00.510 BaseBdev1 00:24:00.510 11:35:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.524 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.784 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.784 "name": "raid_bdev1", 00:24:01.784 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:24:01.784 "strip_size_kb": 0, 00:24:01.784 "state": "online", 00:24:01.784 "raid_level": "raid1", 00:24:01.784 "superblock": true, 00:24:01.784 "num_base_bdevs": 4, 00:24:01.784 "num_base_bdevs_discovered": 2, 00:24:01.784 "num_base_bdevs_operational": 2, 00:24:01.784 "base_bdevs_list": [ 00:24:01.784 { 00:24:01.784 "name": null, 00:24:01.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.784 "is_configured": false, 00:24:01.784 "data_offset": 2048, 00:24:01.784 "data_size": 63488 00:24:01.784 }, 00:24:01.784 { 00:24:01.784 "name": null, 00:24:01.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.784 "is_configured": false, 00:24:01.784 "data_offset": 2048, 00:24:01.784 "data_size": 63488 00:24:01.784 }, 00:24:01.784 { 00:24:01.784 "name": "BaseBdev3", 00:24:01.784 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:24:01.784 "is_configured": 
true, 00:24:01.784 "data_offset": 2048, 00:24:01.784 "data_size": 63488 00:24:01.784 }, 00:24:01.784 { 00:24:01.784 "name": "BaseBdev4", 00:24:01.784 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:24:01.784 "is_configured": true, 00:24:01.784 "data_offset": 2048, 00:24:01.784 "data_size": 63488 00:24:01.784 } 00:24:01.784 ] 00:24:01.784 }' 00:24:01.784 11:35:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.784 11:35:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.350 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.608 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:02.608 "name": "raid_bdev1", 00:24:02.608 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:24:02.608 "strip_size_kb": 0, 00:24:02.608 "state": "online", 00:24:02.608 "raid_level": "raid1", 00:24:02.608 "superblock": true, 00:24:02.608 "num_base_bdevs": 4, 00:24:02.608 "num_base_bdevs_discovered": 2, 00:24:02.608 "num_base_bdevs_operational": 2, 00:24:02.608 "base_bdevs_list": [ 00:24:02.608 { 00:24:02.608 "name": null, 00:24:02.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.608 "is_configured": false, 00:24:02.608 "data_offset": 2048, 00:24:02.608 "data_size": 63488 00:24:02.608 }, 00:24:02.608 { 00:24:02.608 "name": null, 00:24:02.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.608 "is_configured": false, 00:24:02.608 "data_offset": 2048, 00:24:02.608 "data_size": 63488 00:24:02.608 }, 00:24:02.608 { 00:24:02.608 "name": "BaseBdev3", 00:24:02.608 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:24:02.608 "is_configured": true, 00:24:02.608 "data_offset": 2048, 00:24:02.608 "data_size": 63488 00:24:02.608 }, 00:24:02.608 { 00:24:02.608 "name": "BaseBdev4", 00:24:02.608 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:24:02.608 "is_configured": true, 00:24:02.608 "data_offset": 2048, 00:24:02.608 "data_size": 63488 00:24:02.608 } 00:24:02.608 ] 00:24:02.608 }' 00:24:02.608 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 
-- # local es=0 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:02.868 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:03.126 [2024-07-25 11:35:18.794570] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.126 [2024-07-25 11:35:18.794846] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:03.126 [2024-07-25 11:35:18.794868] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:03.126 request: 00:24:03.126 { 00:24:03.126 "base_bdev": "BaseBdev1", 00:24:03.126 "raid_bdev": "raid_bdev1", 00:24:03.126 "method": "bdev_raid_add_base_bdev", 00:24:03.126 "req_id": 1 00:24:03.126 } 00:24:03.126 Got JSON-RPC error response 00:24:03.126 response: 00:24:03.126 { 00:24:03.126 "code": -22, 00:24:03.126 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:03.126 } 00:24:03.126 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:24:03.126 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:03.126 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:03.126 11:35:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:03.126 11:35:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.058 11:35:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.316 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:04.316 "name": "raid_bdev1", 00:24:04.316 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:24:04.316 "strip_size_kb": 0, 00:24:04.316 "state": "online", 00:24:04.316 "raid_level": "raid1", 00:24:04.316 "superblock": true, 00:24:04.316 "num_base_bdevs": 4, 00:24:04.316 "num_base_bdevs_discovered": 2, 00:24:04.316 "num_base_bdevs_operational": 2, 00:24:04.316 "base_bdevs_list": [ 00:24:04.316 { 00:24:04.316 "name": null, 00:24:04.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.316 "is_configured": false, 00:24:04.316 "data_offset": 2048, 00:24:04.316 "data_size": 63488 00:24:04.316 }, 00:24:04.316 { 00:24:04.316 "name": null, 00:24:04.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.316 "is_configured": false, 00:24:04.316 "data_offset": 2048, 00:24:04.316 "data_size": 63488 00:24:04.316 }, 00:24:04.316 { 00:24:04.316 "name": "BaseBdev3", 00:24:04.316 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:24:04.316 "is_configured": true, 00:24:04.316 "data_offset": 2048, 00:24:04.316 "data_size": 63488 00:24:04.316 }, 00:24:04.316 { 00:24:04.316 "name": "BaseBdev4", 00:24:04.316 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:24:04.316 "is_configured": true, 00:24:04.316 "data_offset": 2048, 00:24:04.316 "data_size": 63488 00:24:04.316 } 00:24:04.316 ] 00:24:04.316 }' 00:24:04.316 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:04.316 11:35:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.251 11:35:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:05.510 "name": "raid_bdev1", 00:24:05.510 "uuid": "3e1190b5-5769-480a-938f-1f28d64eba2b", 00:24:05.510 "strip_size_kb": 0, 00:24:05.510 "state": "online", 00:24:05.510 "raid_level": "raid1", 00:24:05.510 "superblock": 
true, 00:24:05.510 "num_base_bdevs": 4, 00:24:05.510 "num_base_bdevs_discovered": 2, 00:24:05.510 "num_base_bdevs_operational": 2, 00:24:05.510 "base_bdevs_list": [ 00:24:05.510 { 00:24:05.510 "name": null, 00:24:05.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.510 "is_configured": false, 00:24:05.510 "data_offset": 2048, 00:24:05.510 "data_size": 63488 00:24:05.510 }, 00:24:05.510 { 00:24:05.510 "name": null, 00:24:05.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.510 "is_configured": false, 00:24:05.510 "data_offset": 2048, 00:24:05.510 "data_size": 63488 00:24:05.510 }, 00:24:05.510 { 00:24:05.510 "name": "BaseBdev3", 00:24:05.510 "uuid": "04059375-4f2c-59ed-a6db-51aa69e45441", 00:24:05.510 "is_configured": true, 00:24:05.510 "data_offset": 2048, 00:24:05.510 "data_size": 63488 00:24:05.510 }, 00:24:05.510 { 00:24:05.510 "name": "BaseBdev4", 00:24:05.510 "uuid": "012a7c33-f443-5308-be50-9a7c1f3df522", 00:24:05.510 "is_configured": true, 00:24:05.510 "data_offset": 2048, 00:24:05.510 "data_size": 63488 00:24:05.510 } 00:24:05.510 ] 00:24:05.510 }' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 89067 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 89067 ']' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 89067 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89067 00:24:05.510 killing process with pid 89067 00:24:05.510 Received shutdown signal, test time was about 60.000000 seconds 00:24:05.510 00:24:05.510 Latency(us) 00:24:05.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.510 =================================================================================================================== 00:24:05.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89067' 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 89067 00:24:05.510 [2024-07-25 11:35:21.280934] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:05.510 11:35:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 89067 00:24:05.510 [2024-07-25 11:35:21.281092] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:05.510 [2024-07-25 11:35:21.281179] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:24:05.510 [2024-07-25 11:35:21.281194] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:06.076 [2024-07-25 11:35:21.721610] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:24:07.452 00:24:07.452 real 0m43.333s 00:24:07.452 user 1m3.694s 00:24:07.452 sys 0m6.030s 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.452 ************************************ 00:24:07.452 END TEST raid_rebuild_test_sb 00:24:07.452 ************************************ 00:24:07.452 11:35:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:24:07.452 11:35:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:07.452 11:35:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.452 11:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:07.452 ************************************ 00:24:07.452 START TEST raid_rebuild_test_io 00:24:07.452 ************************************ 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:07.452 
11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=90018 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 90018 /var/tmp/spdk-raid.sock 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 90018 ']' 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:07.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.452 11:35:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.452 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:07.452 Zero copy mechanism will not be used. 00:24:07.452 [2024-07-25 11:35:23.127536] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
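A condensed sketch of the base bdev stacks the raid_rebuild_test_io setup builds next, using the sizes, names, and flags from the traces below; the loop is an illustrative condensation of the repeated per-bdev calls and the RPC shorthand is not from the SPDK scripts:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Each RAID member is a 32 MB malloc bdev with 512-byte blocks wrapped in a
    # passthru, so the test can tear members down and recreate them on the same
    # backing data.
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # The spare sits behind an extra delay bdev (spare_malloc -> spare_delay -> spare).
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare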
00:24:07.452 [2024-07-25 11:35:23.127712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90018 ] 00:24:07.452 [2024-07-25 11:35:23.295496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.710 [2024-07-25 11:35:23.561661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.967 [2024-07-25 11:35:23.764891] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:07.968 [2024-07-25 11:35:23.764970] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:08.226 11:35:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.226 11:35:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:24:08.226 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:08.226 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:08.484 BaseBdev1_malloc 00:24:08.484 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:08.742 [2024-07-25 11:35:24.507381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:08.742 [2024-07-25 11:35:24.507480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:08.742 [2024-07-25 11:35:24.507521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:08.742 [2024-07-25 11:35:24.507538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:08.742 [2024-07-25 11:35:24.510384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:08.742 [2024-07-25 11:35:24.510430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:08.742 BaseBdev1 00:24:08.742 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:08.742 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:09.000 BaseBdev2_malloc 00:24:09.000 11:35:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:09.258 [2024-07-25 11:35:25.016208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:09.258 [2024-07-25 11:35:25.016306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.258 [2024-07-25 11:35:25.016349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:09.258 [2024-07-25 11:35:25.016367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.258 [2024-07-25 11:35:25.019117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.258 [2024-07-25 11:35:25.019161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:24:09.258 BaseBdev2 00:24:09.258 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:09.258 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:09.515 BaseBdev3_malloc 00:24:09.515 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:09.773 [2024-07-25 11:35:25.572096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:09.773 [2024-07-25 11:35:25.572186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:09.773 [2024-07-25 11:35:25.572229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:09.773 [2024-07-25 11:35:25.572246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:09.773 [2024-07-25 11:35:25.575046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:09.773 [2024-07-25 11:35:25.575092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:09.773 BaseBdev3 00:24:09.773 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:09.773 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:10.031 BaseBdev4_malloc 00:24:10.031 11:35:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:10.288 [2024-07-25 11:35:26.108368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:10.288 [2024-07-25 11:35:26.108467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.288 [2024-07-25 11:35:26.108503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:10.288 [2024-07-25 11:35:26.108519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.288 [2024-07-25 11:35:26.111536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.288 [2024-07-25 11:35:26.111581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:10.288 BaseBdev4 00:24:10.288 11:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:10.546 spare_malloc 00:24:10.546 11:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:10.804 spare_delay 00:24:10.804 11:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:11.061 [2024-07-25 11:35:26.850481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:11.061 [2024-07-25 11:35:26.850565] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:11.061 [2024-07-25 11:35:26.850620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:11.061 [2024-07-25 11:35:26.850658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.061 [2024-07-25 11:35:26.853590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.061 [2024-07-25 11:35:26.853648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:11.061 spare 00:24:11.061 11:35:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:11.319 [2024-07-25 11:35:27.166657] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:11.319 [2024-07-25 11:35:27.169127] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:11.319 [2024-07-25 11:35:27.169275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:11.319 [2024-07-25 11:35:27.169380] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:11.319 [2024-07-25 11:35:27.169534] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:11.319 [2024-07-25 11:35:27.169551] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:11.319 [2024-07-25 11:35:27.170017] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:11.319 [2024-07-25 11:35:27.170243] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:11.319 [2024-07-25 11:35:27.170272] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:11.319 [2024-07-25 11:35:27.170560] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.319 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.884 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:24:11.884 "name": "raid_bdev1", 00:24:11.884 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:11.884 "strip_size_kb": 0, 00:24:11.884 "state": "online", 00:24:11.884 "raid_level": "raid1", 00:24:11.884 "superblock": false, 00:24:11.884 "num_base_bdevs": 4, 00:24:11.884 "num_base_bdevs_discovered": 4, 00:24:11.884 "num_base_bdevs_operational": 4, 00:24:11.884 "base_bdevs_list": [ 00:24:11.884 { 00:24:11.884 "name": "BaseBdev1", 00:24:11.884 "uuid": "81e48c80-3258-565f-9c8a-d417225c2a88", 00:24:11.884 "is_configured": true, 00:24:11.884 "data_offset": 0, 00:24:11.884 "data_size": 65536 00:24:11.884 }, 00:24:11.884 { 00:24:11.884 "name": "BaseBdev2", 00:24:11.884 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:11.884 "is_configured": true, 00:24:11.884 "data_offset": 0, 00:24:11.884 "data_size": 65536 00:24:11.884 }, 00:24:11.884 { 00:24:11.884 "name": "BaseBdev3", 00:24:11.884 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:11.884 "is_configured": true, 00:24:11.884 "data_offset": 0, 00:24:11.884 "data_size": 65536 00:24:11.884 }, 00:24:11.884 { 00:24:11.884 "name": "BaseBdev4", 00:24:11.884 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:11.884 "is_configured": true, 00:24:11.884 "data_offset": 0, 00:24:11.884 "data_size": 65536 00:24:11.884 } 00:24:11.884 ] 00:24:11.884 }' 00:24:11.884 11:35:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.884 11:35:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:12.450 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:12.450 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:12.708 [2024-07-25 11:35:28.343505] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.708 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:24:12.708 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.708 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:12.965 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:24:12.965 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:24:12.965 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:12.965 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:12.965 [2024-07-25 11:35:28.771473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:12.965 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:12.965 Zero copy mechanism will not be used. 00:24:12.965 Running I/O for 60 seconds... 
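Up to this point the script has created four malloc+passthru base bdevs, a delayed "spare" device, and a raid1 array on top of them, then read the array state, size, and data offset back over RPC before pulling BaseBdev1 and starting I/O. A condensed sketch of that RPC sequence, using the same commands and arguments that appear in the trace; the loop and the $RPC shorthand are only for brevity here:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four base devices: 32 MiB malloc bdevs (512-byte blocks) behind passthru wrappers
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# Spare device: malloc -> delay -> passthru; the 100000 us write latencies
# slow writes to the spare so the later rebuild does not finish instantly
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare

# Assemble the raid1 array and read back its state, block count and data offset
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
$RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'
$RPC bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'

# Degrade the array while bdevperf runs, as the test does next
$RPC bdev_raid_remove_base_bdev BaseBdev1

The remainder of the trace shows the expected effect of that last call: the array stays online while num_base_bdevs_discovered drops to 3, and the spare is later attached with bdev_raid_add_base_bdev to start the rebuild.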
00:24:13.223 [2024-07-25 11:35:28.873793] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:13.224 [2024-07-25 11:35:28.889623] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.224 11:35:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.482 11:35:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.482 "name": "raid_bdev1", 00:24:13.482 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:13.482 "strip_size_kb": 0, 00:24:13.482 "state": "online", 00:24:13.482 "raid_level": "raid1", 00:24:13.482 "superblock": false, 00:24:13.482 "num_base_bdevs": 4, 00:24:13.482 "num_base_bdevs_discovered": 3, 00:24:13.482 "num_base_bdevs_operational": 3, 00:24:13.482 "base_bdevs_list": [ 00:24:13.482 { 00:24:13.482 "name": null, 00:24:13.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.482 "is_configured": false, 00:24:13.482 "data_offset": 0, 00:24:13.482 "data_size": 65536 00:24:13.482 }, 00:24:13.482 { 00:24:13.482 "name": "BaseBdev2", 00:24:13.482 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:13.482 "is_configured": true, 00:24:13.482 "data_offset": 0, 00:24:13.482 "data_size": 65536 00:24:13.482 }, 00:24:13.482 { 00:24:13.482 "name": "BaseBdev3", 00:24:13.482 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:13.482 "is_configured": true, 00:24:13.482 "data_offset": 0, 00:24:13.482 "data_size": 65536 00:24:13.482 }, 00:24:13.482 { 00:24:13.482 "name": "BaseBdev4", 00:24:13.482 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:13.482 "is_configured": true, 00:24:13.482 "data_offset": 0, 00:24:13.482 "data_size": 65536 00:24:13.482 } 00:24:13.482 ] 00:24:13.482 }' 00:24:13.482 11:35:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.482 11:35:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 11:35:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:14.304 [2024-07-25 11:35:30.064761] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:24:14.304 [2024-07-25 11:35:30.103108] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:14.304 11:35:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.304 [2024-07-25 11:35:30.105611] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:14.562 [2024-07-25 11:35:30.275963] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:14.820 [2024-07-25 11:35:30.501610] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:14.820 [2024-07-25 11:35:30.502021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:15.078 [2024-07-25 11:35:30.858496] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:15.336 [2024-07-25 11:35:31.073293] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.336 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.593 [2024-07-25 11:35:31.306018] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:15.593 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:15.593 "name": "raid_bdev1", 00:24:15.593 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:15.593 "strip_size_kb": 0, 00:24:15.593 "state": "online", 00:24:15.593 "raid_level": "raid1", 00:24:15.593 "superblock": false, 00:24:15.593 "num_base_bdevs": 4, 00:24:15.593 "num_base_bdevs_discovered": 4, 00:24:15.593 "num_base_bdevs_operational": 4, 00:24:15.593 "process": { 00:24:15.593 "type": "rebuild", 00:24:15.593 "target": "spare", 00:24:15.593 "progress": { 00:24:15.593 "blocks": 14336, 00:24:15.593 "percent": 21 00:24:15.593 } 00:24:15.593 }, 00:24:15.593 "base_bdevs_list": [ 00:24:15.594 { 00:24:15.594 "name": "spare", 00:24:15.594 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:15.594 "is_configured": true, 00:24:15.594 "data_offset": 0, 00:24:15.594 "data_size": 65536 00:24:15.594 }, 00:24:15.594 { 00:24:15.594 "name": "BaseBdev2", 00:24:15.594 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:15.594 "is_configured": true, 00:24:15.594 "data_offset": 0, 00:24:15.594 "data_size": 65536 00:24:15.594 }, 00:24:15.594 { 00:24:15.594 "name": "BaseBdev3", 00:24:15.594 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:15.594 "is_configured": true, 00:24:15.594 "data_offset": 0, 00:24:15.594 "data_size": 
65536 00:24:15.594 }, 00:24:15.594 { 00:24:15.594 "name": "BaseBdev4", 00:24:15.594 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:15.594 "is_configured": true, 00:24:15.594 "data_offset": 0, 00:24:15.594 "data_size": 65536 00:24:15.594 } 00:24:15.594 ] 00:24:15.594 }' 00:24:15.594 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:15.594 [2024-07-25 11:35:31.436768] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:15.594 [2024-07-25 11:35:31.437691] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:15.594 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.594 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:15.851 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.851 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:15.851 [2024-07-25 11:35:31.730523] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.109 [2024-07-25 11:35:31.897361] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:16.109 [2024-07-25 11:35:31.911018] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.109 [2024-07-25 11:35:31.911082] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.109 [2024-07-25 11:35:31.911111] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:16.109 [2024-07-25 11:35:31.934716] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.109 11:35:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.367 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.367 "name": 
"raid_bdev1", 00:24:16.367 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:16.367 "strip_size_kb": 0, 00:24:16.367 "state": "online", 00:24:16.367 "raid_level": "raid1", 00:24:16.367 "superblock": false, 00:24:16.367 "num_base_bdevs": 4, 00:24:16.367 "num_base_bdevs_discovered": 3, 00:24:16.367 "num_base_bdevs_operational": 3, 00:24:16.367 "base_bdevs_list": [ 00:24:16.367 { 00:24:16.367 "name": null, 00:24:16.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.367 "is_configured": false, 00:24:16.367 "data_offset": 0, 00:24:16.367 "data_size": 65536 00:24:16.367 }, 00:24:16.367 { 00:24:16.367 "name": "BaseBdev2", 00:24:16.367 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:16.367 "is_configured": true, 00:24:16.367 "data_offset": 0, 00:24:16.367 "data_size": 65536 00:24:16.367 }, 00:24:16.367 { 00:24:16.367 "name": "BaseBdev3", 00:24:16.367 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:16.367 "is_configured": true, 00:24:16.367 "data_offset": 0, 00:24:16.367 "data_size": 65536 00:24:16.367 }, 00:24:16.367 { 00:24:16.367 "name": "BaseBdev4", 00:24:16.367 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:16.367 "is_configured": true, 00:24:16.367 "data_offset": 0, 00:24:16.367 "data_size": 65536 00:24:16.367 } 00:24:16.367 ] 00:24:16.367 }' 00:24:16.367 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.367 11:35:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:17.300 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.301 11:35:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:17.566 "name": "raid_bdev1", 00:24:17.566 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:17.566 "strip_size_kb": 0, 00:24:17.566 "state": "online", 00:24:17.566 "raid_level": "raid1", 00:24:17.566 "superblock": false, 00:24:17.566 "num_base_bdevs": 4, 00:24:17.566 "num_base_bdevs_discovered": 3, 00:24:17.566 "num_base_bdevs_operational": 3, 00:24:17.566 "base_bdevs_list": [ 00:24:17.566 { 00:24:17.566 "name": null, 00:24:17.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.566 "is_configured": false, 00:24:17.566 "data_offset": 0, 00:24:17.566 "data_size": 65536 00:24:17.566 }, 00:24:17.566 { 00:24:17.566 "name": "BaseBdev2", 00:24:17.566 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:17.566 "is_configured": true, 00:24:17.566 "data_offset": 0, 00:24:17.566 "data_size": 65536 00:24:17.566 }, 00:24:17.566 { 00:24:17.566 "name": "BaseBdev3", 00:24:17.566 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:17.566 "is_configured": true, 00:24:17.566 "data_offset": 0, 00:24:17.566 "data_size": 65536 00:24:17.566 }, 00:24:17.566 { 00:24:17.566 "name": 
"BaseBdev4", 00:24:17.566 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:17.566 "is_configured": true, 00:24:17.566 "data_offset": 0, 00:24:17.566 "data_size": 65536 00:24:17.566 } 00:24:17.566 ] 00:24:17.566 }' 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.566 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:17.826 [2024-07-25 11:35:33.602942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.826 [2024-07-25 11:35:33.669976] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:17.826 [2024-07-25 11:35:33.672555] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.826 11:35:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:18.083 [2024-07-25 11:35:33.774649] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:18.083 [2024-07-25 11:35:33.775403] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:18.083 [2024-07-25 11:35:33.899166] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:18.083 [2024-07-25 11:35:33.900008] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:18.398 [2024-07-25 11:35:34.250530] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:18.657 [2024-07-25 11:35:34.360054] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.915 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.915 [2024-07-25 11:35:34.728704] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:18.915 [2024-07-25 11:35:34.729554] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:19.173 [2024-07-25 11:35:34.963689] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:19.173 [2024-07-25 11:35:34.964536] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:19.173 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:19.173 "name": "raid_bdev1", 00:24:19.173 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:19.173 "strip_size_kb": 0, 00:24:19.173 "state": "online", 00:24:19.173 "raid_level": "raid1", 00:24:19.173 "superblock": false, 00:24:19.173 "num_base_bdevs": 4, 00:24:19.173 "num_base_bdevs_discovered": 4, 00:24:19.173 "num_base_bdevs_operational": 4, 00:24:19.173 "process": { 00:24:19.173 "type": "rebuild", 00:24:19.173 "target": "spare", 00:24:19.173 "progress": { 00:24:19.173 "blocks": 14336, 00:24:19.173 "percent": 21 00:24:19.173 } 00:24:19.173 }, 00:24:19.173 "base_bdevs_list": [ 00:24:19.173 { 00:24:19.173 "name": "spare", 00:24:19.173 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:19.173 "is_configured": true, 00:24:19.173 "data_offset": 0, 00:24:19.173 "data_size": 65536 00:24:19.173 }, 00:24:19.173 { 00:24:19.173 "name": "BaseBdev2", 00:24:19.173 "uuid": "fa657bea-0e35-5716-8ad8-c7de6ab90f66", 00:24:19.173 "is_configured": true, 00:24:19.173 "data_offset": 0, 00:24:19.173 "data_size": 65536 00:24:19.173 }, 00:24:19.173 { 00:24:19.173 "name": "BaseBdev3", 00:24:19.173 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:19.173 "is_configured": true, 00:24:19.173 "data_offset": 0, 00:24:19.173 "data_size": 65536 00:24:19.173 }, 00:24:19.173 { 00:24:19.173 "name": "BaseBdev4", 00:24:19.173 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:19.173 "is_configured": true, 00:24:19.173 "data_offset": 0, 00:24:19.173 "data_size": 65536 00:24:19.173 } 00:24:19.173 ] 00:24:19.173 }' 00:24:19.173 11:35:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:19.173 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:19.173 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:24:19.431 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:19.740 [2024-07-25 11:35:35.342541] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:19.740 [2024-07-25 11:35:35.442698] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:19.740 [2024-07-25 11:35:35.442754] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( 
num_base_bdevs_operational-- )) 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.740 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.740 [2024-07-25 11:35:35.574018] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:19.740 [2024-07-25 11:35:35.574388] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:20.005 "name": "raid_bdev1", 00:24:20.005 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:20.005 "strip_size_kb": 0, 00:24:20.005 "state": "online", 00:24:20.005 "raid_level": "raid1", 00:24:20.005 "superblock": false, 00:24:20.005 "num_base_bdevs": 4, 00:24:20.005 "num_base_bdevs_discovered": 3, 00:24:20.005 "num_base_bdevs_operational": 3, 00:24:20.005 "process": { 00:24:20.005 "type": "rebuild", 00:24:20.005 "target": "spare", 00:24:20.005 "progress": { 00:24:20.005 "blocks": 22528, 00:24:20.005 "percent": 34 00:24:20.005 } 00:24:20.005 }, 00:24:20.005 "base_bdevs_list": [ 00:24:20.005 { 00:24:20.005 "name": "spare", 00:24:20.005 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:20.005 "is_configured": true, 00:24:20.005 "data_offset": 0, 00:24:20.005 "data_size": 65536 00:24:20.005 }, 00:24:20.005 { 00:24:20.005 "name": null, 00:24:20.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.005 "is_configured": false, 00:24:20.005 "data_offset": 0, 00:24:20.005 "data_size": 65536 00:24:20.005 }, 00:24:20.005 { 00:24:20.005 "name": "BaseBdev3", 00:24:20.005 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:20.005 "is_configured": true, 00:24:20.005 "data_offset": 0, 00:24:20.005 "data_size": 65536 00:24:20.005 }, 00:24:20.005 { 00:24:20.005 "name": "BaseBdev4", 00:24:20.005 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:20.005 "is_configured": true, 00:24:20.005 "data_offset": 0, 00:24:20.005 "data_size": 65536 00:24:20.005 } 00:24:20.005 ] 00:24:20.005 }' 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=1099 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:20.005 11:35:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.005 11:35:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.263 [2024-07-25 11:35:35.938524] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:20.522 [2024-07-25 11:35:36.152810] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:20.522 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:20.522 "name": "raid_bdev1", 00:24:20.522 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:20.522 "strip_size_kb": 0, 00:24:20.522 "state": "online", 00:24:20.522 "raid_level": "raid1", 00:24:20.522 "superblock": false, 00:24:20.522 "num_base_bdevs": 4, 00:24:20.522 "num_base_bdevs_discovered": 3, 00:24:20.522 "num_base_bdevs_operational": 3, 00:24:20.522 "process": { 00:24:20.522 "type": "rebuild", 00:24:20.522 "target": "spare", 00:24:20.522 "progress": { 00:24:20.522 "blocks": 26624, 00:24:20.522 "percent": 40 00:24:20.522 } 00:24:20.522 }, 00:24:20.522 "base_bdevs_list": [ 00:24:20.522 { 00:24:20.522 "name": "spare", 00:24:20.522 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:20.522 "is_configured": true, 00:24:20.522 "data_offset": 0, 00:24:20.522 "data_size": 65536 00:24:20.522 }, 00:24:20.522 { 00:24:20.522 "name": null, 00:24:20.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.522 "is_configured": false, 00:24:20.522 "data_offset": 0, 00:24:20.522 "data_size": 65536 00:24:20.522 }, 00:24:20.522 { 00:24:20.522 "name": "BaseBdev3", 00:24:20.522 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:20.522 "is_configured": true, 00:24:20.522 "data_offset": 0, 00:24:20.522 "data_size": 65536 00:24:20.522 }, 00:24:20.522 { 00:24:20.522 "name": "BaseBdev4", 00:24:20.522 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:20.522 "is_configured": true, 00:24:20.522 "data_offset": 0, 00:24:20.522 "data_size": 65536 00:24:20.522 } 00:24:20.522 ] 00:24:20.522 }' 00:24:20.522 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:20.523 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:20.523 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:20.523 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:20.523 11:35:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:21.090 [2024-07-25 11:35:36.773463] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:21.348 [2024-07-25 11:35:37.111559] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.607 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.607 [2024-07-25 11:35:37.324471] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:21.865 "name": "raid_bdev1", 00:24:21.865 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:21.865 "strip_size_kb": 0, 00:24:21.865 "state": "online", 00:24:21.865 "raid_level": "raid1", 00:24:21.865 "superblock": false, 00:24:21.865 "num_base_bdevs": 4, 00:24:21.865 "num_base_bdevs_discovered": 3, 00:24:21.865 "num_base_bdevs_operational": 3, 00:24:21.865 "process": { 00:24:21.865 "type": "rebuild", 00:24:21.865 "target": "spare", 00:24:21.865 "progress": { 00:24:21.865 "blocks": 47104, 00:24:21.865 "percent": 71 00:24:21.865 } 00:24:21.865 }, 00:24:21.865 "base_bdevs_list": [ 00:24:21.865 { 00:24:21.865 "name": "spare", 00:24:21.865 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:21.865 "is_configured": true, 00:24:21.865 "data_offset": 0, 00:24:21.865 "data_size": 65536 00:24:21.865 }, 00:24:21.865 { 00:24:21.865 "name": null, 00:24:21.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.865 "is_configured": false, 00:24:21.865 "data_offset": 0, 00:24:21.865 "data_size": 65536 00:24:21.865 }, 00:24:21.865 { 00:24:21.865 "name": "BaseBdev3", 00:24:21.865 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:21.865 "is_configured": true, 00:24:21.865 "data_offset": 0, 00:24:21.865 "data_size": 65536 00:24:21.865 }, 00:24:21.865 { 00:24:21.865 "name": "BaseBdev4", 00:24:21.865 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:21.865 "is_configured": true, 00:24:21.865 "data_offset": 0, 00:24:21.865 "data_size": 65536 00:24:21.865 } 00:24:21.865 ] 00:24:21.865 }' 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.865 11:35:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:22.431 [2024-07-25 11:35:38.126368] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 
offset_begin: 55296 offset_end: 61440 00:24:22.997 [2024-07-25 11:35:38.579499] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:22.997 [2024-07-25 11:35:38.687330] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.997 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.997 [2024-07-25 11:35:38.690237] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.254 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:23.254 "name": "raid_bdev1", 00:24:23.254 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:23.254 "strip_size_kb": 0, 00:24:23.254 "state": "online", 00:24:23.254 "raid_level": "raid1", 00:24:23.254 "superblock": false, 00:24:23.254 "num_base_bdevs": 4, 00:24:23.255 "num_base_bdevs_discovered": 3, 00:24:23.255 "num_base_bdevs_operational": 3, 00:24:23.255 "base_bdevs_list": [ 00:24:23.255 { 00:24:23.255 "name": "spare", 00:24:23.255 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:23.255 "is_configured": true, 00:24:23.255 "data_offset": 0, 00:24:23.255 "data_size": 65536 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "name": null, 00:24:23.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.255 "is_configured": false, 00:24:23.255 "data_offset": 0, 00:24:23.255 "data_size": 65536 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "name": "BaseBdev3", 00:24:23.255 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:23.255 "is_configured": true, 00:24:23.255 "data_offset": 0, 00:24:23.255 "data_size": 65536 00:24:23.255 }, 00:24:23.255 { 00:24:23.255 "name": "BaseBdev4", 00:24:23.255 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:23.255 "is_configured": true, 00:24:23.255 "data_offset": 0, 00:24:23.255 "data_size": 65536 00:24:23.255 } 00:24:23.255 ] 00:24:23.255 }' 00:24:23.255 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:23.255 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:23.255 11:35:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_name=raid_bdev1 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.255 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.512 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:23.512 "name": "raid_bdev1", 00:24:23.512 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:23.512 "strip_size_kb": 0, 00:24:23.512 "state": "online", 00:24:23.512 "raid_level": "raid1", 00:24:23.512 "superblock": false, 00:24:23.512 "num_base_bdevs": 4, 00:24:23.512 "num_base_bdevs_discovered": 3, 00:24:23.512 "num_base_bdevs_operational": 3, 00:24:23.512 "base_bdevs_list": [ 00:24:23.512 { 00:24:23.512 "name": "spare", 00:24:23.512 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:23.512 "is_configured": true, 00:24:23.512 "data_offset": 0, 00:24:23.512 "data_size": 65536 00:24:23.512 }, 00:24:23.512 { 00:24:23.512 "name": null, 00:24:23.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.512 "is_configured": false, 00:24:23.513 "data_offset": 0, 00:24:23.513 "data_size": 65536 00:24:23.513 }, 00:24:23.513 { 00:24:23.513 "name": "BaseBdev3", 00:24:23.513 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:23.513 "is_configured": true, 00:24:23.513 "data_offset": 0, 00:24:23.513 "data_size": 65536 00:24:23.513 }, 00:24:23.513 { 00:24:23.513 "name": "BaseBdev4", 00:24:23.513 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:23.513 "is_configured": true, 00:24:23.513 "data_offset": 0, 00:24:23.513 "data_size": 65536 00:24:23.513 } 00:24:23.513 ] 00:24:23.513 }' 00:24:23.513 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:23.513 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:23.513 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.771 11:35:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.771 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.029 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.029 "name": "raid_bdev1", 00:24:24.029 "uuid": "d67b06e3-6a49-4113-bff6-9014fe58c75f", 00:24:24.029 "strip_size_kb": 0, 00:24:24.029 "state": "online", 00:24:24.029 "raid_level": "raid1", 00:24:24.029 "superblock": false, 00:24:24.029 "num_base_bdevs": 4, 00:24:24.029 "num_base_bdevs_discovered": 3, 00:24:24.029 "num_base_bdevs_operational": 3, 00:24:24.029 "base_bdevs_list": [ 00:24:24.029 { 00:24:24.029 "name": "spare", 00:24:24.029 "uuid": "7cc00538-7d21-5afe-85af-2c49a2953eb4", 00:24:24.029 "is_configured": true, 00:24:24.029 "data_offset": 0, 00:24:24.029 "data_size": 65536 00:24:24.029 }, 00:24:24.029 { 00:24:24.029 "name": null, 00:24:24.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.029 "is_configured": false, 00:24:24.029 "data_offset": 0, 00:24:24.029 "data_size": 65536 00:24:24.029 }, 00:24:24.029 { 00:24:24.029 "name": "BaseBdev3", 00:24:24.029 "uuid": "24ad6ae0-0eb5-5eb9-ad75-09d33e81eef3", 00:24:24.029 "is_configured": true, 00:24:24.029 "data_offset": 0, 00:24:24.029 "data_size": 65536 00:24:24.029 }, 00:24:24.029 { 00:24:24.029 "name": "BaseBdev4", 00:24:24.029 "uuid": "03e495d4-3500-54e5-a6b3-0e4e4e6c36e9", 00:24:24.029 "is_configured": true, 00:24:24.029 "data_offset": 0, 00:24:24.029 "data_size": 65536 00:24:24.029 } 00:24:24.029 ] 00:24:24.029 }' 00:24:24.029 11:35:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.029 11:35:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:24.595 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:24.852 [2024-07-25 11:35:40.549341] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.852 [2024-07-25 11:35:40.549422] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.852 00:24:24.852 Latency(us) 00:24:24.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.852 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:24.852 raid_bdev1 : 11.80 94.95 284.86 0.00 0.00 14081.21 290.44 124875.87 00:24:24.852 =================================================================================================================== 00:24:24.852 Total : 94.95 284.86 0.00 0.00 14081.21 290.44 124875.87 00:24:24.852 [2024-07-25 11:35:40.588661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.852 0 00:24:24.852 [2024-07-25 11:35:40.588847] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.852 [2024-07-25 11:35:40.588988] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.852 [2024-07-25 11:35:40.589012] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:24.852 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.852 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:25.110 11:35:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:25.368 /dev/nbd0 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:25.368 1+0 records in 00:24:25.368 1+0 records out 00:24:25.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535917 s, 7.6 MB/s 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:25.368 11:35:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:25.368 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:25.627 /dev/nbd1 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:25.627 1+0 records in 00:24:25.627 1+0 records out 00:24:25.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644292 s, 6.4 MB/s 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@886 -- # size=4096 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:25.627 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:25.886 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:26.143 11:35:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:26.143 11:35:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:26.402 /dev/nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.402 1+0 records in 00:24:26.402 1+0 records out 00:24:26.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060248 s, 6.8 MB/s 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:26.402 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:26.660 11:35:42 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:26.660 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 90018 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 90018 ']' 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 90018 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90018 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:26.918 killing process with pid 90018 00:24:26.918 Received shutdown signal, test time was about 13.955381 seconds 00:24:26.918 00:24:26.918 Latency(us) 00:24:26.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.918 =================================================================================================================== 00:24:26.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90018' 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 90018 00:24:26.918 [2024-07-25 11:35:42.729507] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:26.918 11:35:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 90018 00:24:27.522 [2024-07-25 11:35:43.094219] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.457 11:35:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:24:28.457 00:24:28.457 real 0m21.326s 00:24:28.457 user 0m33.376s 00:24:28.457 sys 0m2.681s 00:24:28.457 11:35:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.457 ************************************ 00:24:28.457 END TEST raid_rebuild_test_io 00:24:28.457 11:35:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:24:28.457 ************************************ 00:24:28.715 11:35:44 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:24:28.715 11:35:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:28.715 11:35:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.715 11:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:28.715 ************************************ 00:24:28.715 START TEST raid_rebuild_test_sb_io 00:24:28.715 ************************************ 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:28.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=90505 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 90505 /var/tmp/spdk-raid.sock 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90505 ']' 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.715 11:35:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:28.715 [2024-07-25 11:35:44.455943] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:24:28.715 [2024-07-25 11:35:44.456332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90505 ] 00:24:28.715 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:28.715 Zero copy mechanism will not be used. 
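The trace above (bdev_raid.sh@611-613) brings up bdevperf as the RPC target for this rebuild test: the binary is started in wait mode on a dedicated UNIX-domain socket, and the script blocks on waitforlisten until that socket accepts RPCs before any bdevs are created. A minimal sketch of that bring-up in the same shell style, reusing only the paths and flags recorded in this run (waitforlisten is the helper sourced from the repo's test/common/autotest_common.sh; everything else is illustrative only):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_server=/var/tmp/spdk-raid.sock

    # Start bdevperf in wait mode (-z) on its own RPC socket; I/O only begins
    # later, when bdevperf.py ... perform_tests is invoked over that socket.
    "$rootdir/build/examples/bdevperf" -r "$rpc_server" -T raid_bdev1 \
            -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the new process is up and listening on the socket.
    waitforlisten "$raid_pid" "$rpc_server"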
00:24:28.973 [2024-07-25 11:35:44.623443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.231 [2024-07-25 11:35:44.891114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.231 [2024-07-25 11:35:45.092206] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.231 [2024-07-25 11:35:45.092526] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:29.797 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.797 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:24:29.797 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:29.797 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:30.054 BaseBdev1_malloc 00:24:30.054 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:30.311 [2024-07-25 11:35:45.937497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:30.311 [2024-07-25 11:35:45.937596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.311 [2024-07-25 11:35:45.937673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:30.311 [2024-07-25 11:35:45.937704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.311 [2024-07-25 11:35:45.940738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.311 [2024-07-25 11:35:45.940789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:30.311 BaseBdev1 00:24:30.311 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:30.311 11:35:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:30.568 BaseBdev2_malloc 00:24:30.568 11:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:30.825 [2024-07-25 11:35:46.530457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:30.825 [2024-07-25 11:35:46.530573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.825 [2024-07-25 11:35:46.530666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:30.825 [2024-07-25 11:35:46.530697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.825 [2024-07-25 11:35:46.533586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.825 [2024-07-25 11:35:46.533652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:30.825 BaseBdev2 00:24:30.825 11:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:30.825 11:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:31.083 BaseBdev3_malloc 00:24:31.083 11:35:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:31.342 [2024-07-25 11:35:47.063443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:31.342 [2024-07-25 11:35:47.063541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.342 [2024-07-25 11:35:47.063600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:31.342 [2024-07-25 11:35:47.063661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.342 [2024-07-25 11:35:47.066620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.342 [2024-07-25 11:35:47.066683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:31.342 BaseBdev3 00:24:31.342 11:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:31.342 11:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:31.600 BaseBdev4_malloc 00:24:31.600 11:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:31.858 [2024-07-25 11:35:47.627268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:31.858 [2024-07-25 11:35:47.627378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.858 [2024-07-25 11:35:47.627441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:31.858 [2024-07-25 11:35:47.627471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.858 [2024-07-25 11:35:47.630400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.858 [2024-07-25 11:35:47.630453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:31.858 BaseBdev4 00:24:31.858 11:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:32.116 spare_malloc 00:24:32.116 11:35:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:32.374 spare_delay 00:24:32.374 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:32.632 [2024-07-25 11:35:48.351236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:32.632 [2024-07-25 11:35:48.351338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.632 [2024-07-25 11:35:48.351397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:32.632 [2024-07-25 11:35:48.351423] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:24:32.632 [2024-07-25 11:35:48.354484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.632 [2024-07-25 11:35:48.354536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:32.632 spare 00:24:32.632 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:32.891 [2024-07-25 11:35:48.623412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.891 [2024-07-25 11:35:48.625993] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:32.891 [2024-07-25 11:35:48.626100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:32.891 [2024-07-25 11:35:48.626182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:32.891 [2024-07-25 11:35:48.626462] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:32.891 [2024-07-25 11:35:48.626482] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:32.891 [2024-07-25 11:35:48.626888] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:32.891 [2024-07-25 11:35:48.627116] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:32.891 [2024-07-25 11:35:48.627145] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:32.891 [2024-07-25 11:35:48.627415] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.891 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.150 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.150 "name": "raid_bdev1", 00:24:33.150 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:33.150 "strip_size_kb": 0, 00:24:33.150 "state": "online", 00:24:33.150 "raid_level": "raid1", 
00:24:33.150 "superblock": true, 00:24:33.150 "num_base_bdevs": 4, 00:24:33.150 "num_base_bdevs_discovered": 4, 00:24:33.150 "num_base_bdevs_operational": 4, 00:24:33.150 "base_bdevs_list": [ 00:24:33.150 { 00:24:33.150 "name": "BaseBdev1", 00:24:33.150 "uuid": "ac04fddb-032b-5285-8c9b-4bca6a5b8fa3", 00:24:33.150 "is_configured": true, 00:24:33.150 "data_offset": 2048, 00:24:33.150 "data_size": 63488 00:24:33.150 }, 00:24:33.150 { 00:24:33.150 "name": "BaseBdev2", 00:24:33.150 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:33.150 "is_configured": true, 00:24:33.150 "data_offset": 2048, 00:24:33.150 "data_size": 63488 00:24:33.150 }, 00:24:33.150 { 00:24:33.150 "name": "BaseBdev3", 00:24:33.150 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:33.150 "is_configured": true, 00:24:33.150 "data_offset": 2048, 00:24:33.150 "data_size": 63488 00:24:33.150 }, 00:24:33.150 { 00:24:33.150 "name": "BaseBdev4", 00:24:33.150 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:33.150 "is_configured": true, 00:24:33.150 "data_offset": 2048, 00:24:33.150 "data_size": 63488 00:24:33.150 } 00:24:33.150 ] 00:24:33.150 }' 00:24:33.150 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.150 11:35:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:34.105 11:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:34.105 11:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:34.105 [2024-07-25 11:35:49.884145] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.105 11:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:24:34.105 11:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.105 11:35:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:34.363 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:24:34.363 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:24:34.363 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:34.363 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:34.621 [2024-07-25 11:35:50.331827] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:34.621 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:34.621 Zero copy mechanism will not be used. 00:24:34.621 Running I/O for 60 seconds... 
00:24:34.621 [2024-07-25 11:35:50.411395] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:34.621 [2024-07-25 11:35:50.411737] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.621 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.879 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:34.879 "name": "raid_bdev1", 00:24:34.879 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:34.879 "strip_size_kb": 0, 00:24:34.879 "state": "online", 00:24:34.879 "raid_level": "raid1", 00:24:34.879 "superblock": true, 00:24:34.879 "num_base_bdevs": 4, 00:24:34.879 "num_base_bdevs_discovered": 3, 00:24:34.879 "num_base_bdevs_operational": 3, 00:24:34.879 "base_bdevs_list": [ 00:24:34.879 { 00:24:34.879 "name": null, 00:24:34.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.879 "is_configured": false, 00:24:34.879 "data_offset": 2048, 00:24:34.879 "data_size": 63488 00:24:34.879 }, 00:24:34.879 { 00:24:34.879 "name": "BaseBdev2", 00:24:34.879 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:34.879 "is_configured": true, 00:24:34.879 "data_offset": 2048, 00:24:34.879 "data_size": 63488 00:24:34.879 }, 00:24:34.879 { 00:24:34.879 "name": "BaseBdev3", 00:24:34.879 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:34.879 "is_configured": true, 00:24:34.879 "data_offset": 2048, 00:24:34.879 "data_size": 63488 00:24:34.879 }, 00:24:34.879 { 00:24:34.879 "name": "BaseBdev4", 00:24:34.879 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:34.879 "is_configured": true, 00:24:34.879 "data_offset": 2048, 00:24:34.879 "data_size": 63488 00:24:34.879 } 00:24:34.879 ] 00:24:34.879 }' 00:24:34.879 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:34.879 11:35:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:35.814 11:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:35.814 [2024-07-25 
11:35:51.652706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:36.073 11:35:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:36.073 [2024-07-25 11:35:51.720195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:36.073 [2024-07-25 11:35:51.722702] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:36.073 [2024-07-25 11:35:51.850419] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:36.073 [2024-07-25 11:35:51.852129] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:36.331 [2024-07-25 11:35:52.058175] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:36.331 [2024-07-25 11:35:52.058558] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:36.589 [2024-07-25 11:35:52.407605] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:36.847 [2024-07-25 11:35:52.531700] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.847 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.104 [2024-07-25 11:35:52.759558] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:37.104 [2024-07-25 11:35:52.760219] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:37.104 [2024-07-25 11:35:52.901626] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:37.362 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:37.362 "name": "raid_bdev1", 00:24:37.362 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:37.362 "strip_size_kb": 0, 00:24:37.362 "state": "online", 00:24:37.362 "raid_level": "raid1", 00:24:37.362 "superblock": true, 00:24:37.362 "num_base_bdevs": 4, 00:24:37.362 "num_base_bdevs_discovered": 4, 00:24:37.362 "num_base_bdevs_operational": 4, 00:24:37.363 "process": { 00:24:37.363 "type": "rebuild", 00:24:37.363 "target": "spare", 00:24:37.363 "progress": { 00:24:37.363 "blocks": 16384, 00:24:37.363 "percent": 25 00:24:37.363 } 00:24:37.363 }, 00:24:37.363 "base_bdevs_list": [ 00:24:37.363 { 00:24:37.363 "name": "spare", 00:24:37.363 
"uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:37.363 "is_configured": true, 00:24:37.363 "data_offset": 2048, 00:24:37.363 "data_size": 63488 00:24:37.363 }, 00:24:37.363 { 00:24:37.363 "name": "BaseBdev2", 00:24:37.363 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:37.363 "is_configured": true, 00:24:37.363 "data_offset": 2048, 00:24:37.363 "data_size": 63488 00:24:37.363 }, 00:24:37.363 { 00:24:37.363 "name": "BaseBdev3", 00:24:37.363 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:37.363 "is_configured": true, 00:24:37.363 "data_offset": 2048, 00:24:37.363 "data_size": 63488 00:24:37.363 }, 00:24:37.363 { 00:24:37.363 "name": "BaseBdev4", 00:24:37.363 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:37.363 "is_configured": true, 00:24:37.363 "data_offset": 2048, 00:24:37.363 "data_size": 63488 00:24:37.363 } 00:24:37.363 ] 00:24:37.363 }' 00:24:37.363 11:35:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:37.363 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:37.363 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:37.363 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:37.363 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:37.363 [2024-07-25 11:35:53.137269] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:37.621 [2024-07-25 11:35:53.356150] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:37.621 [2024-07-25 11:35:53.387530] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:37.621 [2024-07-25 11:35:53.490820] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:37.878 [2024-07-25 11:35:53.512161] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.878 [2024-07-25 11:35:53.512280] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:37.878 [2024-07-25 11:35:53.512307] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:37.878 [2024-07-25 11:35:53.535516] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.878 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.879 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.136 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.136 "name": "raid_bdev1", 00:24:38.136 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:38.136 "strip_size_kb": 0, 00:24:38.136 "state": "online", 00:24:38.136 "raid_level": "raid1", 00:24:38.136 "superblock": true, 00:24:38.136 "num_base_bdevs": 4, 00:24:38.136 "num_base_bdevs_discovered": 3, 00:24:38.136 "num_base_bdevs_operational": 3, 00:24:38.136 "base_bdevs_list": [ 00:24:38.136 { 00:24:38.136 "name": null, 00:24:38.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.136 "is_configured": false, 00:24:38.136 "data_offset": 2048, 00:24:38.136 "data_size": 63488 00:24:38.136 }, 00:24:38.136 { 00:24:38.136 "name": "BaseBdev2", 00:24:38.136 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:38.136 "is_configured": true, 00:24:38.136 "data_offset": 2048, 00:24:38.136 "data_size": 63488 00:24:38.136 }, 00:24:38.136 { 00:24:38.136 "name": "BaseBdev3", 00:24:38.136 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:38.136 "is_configured": true, 00:24:38.136 "data_offset": 2048, 00:24:38.136 "data_size": 63488 00:24:38.136 }, 00:24:38.136 { 00:24:38.136 "name": "BaseBdev4", 00:24:38.136 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:38.136 "is_configured": true, 00:24:38.136 "data_offset": 2048, 00:24:38.136 "data_size": 63488 00:24:38.136 } 00:24:38.136 ] 00:24:38.136 }' 00:24:38.136 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.136 11:35:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.702 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.266 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:39.266 "name": "raid_bdev1", 00:24:39.266 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:39.266 "strip_size_kb": 0, 00:24:39.266 "state": "online", 00:24:39.266 "raid_level": "raid1", 00:24:39.266 "superblock": true, 00:24:39.266 "num_base_bdevs": 4, 00:24:39.266 "num_base_bdevs_discovered": 3, 00:24:39.266 "num_base_bdevs_operational": 3, 00:24:39.266 "base_bdevs_list": [ 00:24:39.266 { 
00:24:39.266 "name": null, 00:24:39.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.266 "is_configured": false, 00:24:39.266 "data_offset": 2048, 00:24:39.266 "data_size": 63488 00:24:39.266 }, 00:24:39.266 { 00:24:39.266 "name": "BaseBdev2", 00:24:39.266 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:39.266 "is_configured": true, 00:24:39.266 "data_offset": 2048, 00:24:39.266 "data_size": 63488 00:24:39.266 }, 00:24:39.266 { 00:24:39.267 "name": "BaseBdev3", 00:24:39.267 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:39.267 "is_configured": true, 00:24:39.267 "data_offset": 2048, 00:24:39.267 "data_size": 63488 00:24:39.267 }, 00:24:39.267 { 00:24:39.267 "name": "BaseBdev4", 00:24:39.267 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:39.267 "is_configured": true, 00:24:39.267 "data_offset": 2048, 00:24:39.267 "data_size": 63488 00:24:39.267 } 00:24:39.267 ] 00:24:39.267 }' 00:24:39.267 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:39.267 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:39.267 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:39.267 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:39.267 11:35:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:39.523 [2024-07-25 11:35:55.181861] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.523 11:35:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:39.523 [2024-07-25 11:35:55.278363] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:39.523 [2024-07-25 11:35:55.280865] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.523 [2024-07-25 11:35:55.384566] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:39.523 [2024-07-25 11:35:55.385250] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:39.800 [2024-07-25 11:35:55.509595] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:39.800 [2024-07-25 11:35:55.509998] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:40.059 [2024-07-25 11:35:55.924749] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.624 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.624 [2024-07-25 11:35:56.290682] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:40.882 [2024-07-25 11:35:56.526969] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:40.882 "name": "raid_bdev1", 00:24:40.882 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:40.882 "strip_size_kb": 0, 00:24:40.882 "state": "online", 00:24:40.882 "raid_level": "raid1", 00:24:40.882 "superblock": true, 00:24:40.882 "num_base_bdevs": 4, 00:24:40.882 "num_base_bdevs_discovered": 4, 00:24:40.882 "num_base_bdevs_operational": 4, 00:24:40.882 "process": { 00:24:40.882 "type": "rebuild", 00:24:40.882 "target": "spare", 00:24:40.882 "progress": { 00:24:40.882 "blocks": 16384, 00:24:40.882 "percent": 25 00:24:40.882 } 00:24:40.882 }, 00:24:40.882 "base_bdevs_list": [ 00:24:40.882 { 00:24:40.882 "name": "spare", 00:24:40.882 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:40.882 "is_configured": true, 00:24:40.882 "data_offset": 2048, 00:24:40.882 "data_size": 63488 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "name": "BaseBdev2", 00:24:40.882 "uuid": "ecccad70-2c67-5f5b-98e2-408f5582b36a", 00:24:40.882 "is_configured": true, 00:24:40.882 "data_offset": 2048, 00:24:40.882 "data_size": 63488 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "name": "BaseBdev3", 00:24:40.882 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:40.882 "is_configured": true, 00:24:40.882 "data_offset": 2048, 00:24:40.882 "data_size": 63488 00:24:40.882 }, 00:24:40.882 { 00:24:40.882 "name": "BaseBdev4", 00:24:40.882 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:40.882 "is_configured": true, 00:24:40.882 "data_offset": 2048, 00:24:40.882 "data_size": 63488 00:24:40.882 } 00:24:40.882 ] 00:24:40.882 }' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:24:40.882 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:24:40.882 11:35:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:41.139 [2024-07-25 11:35:56.902895] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:24:41.139 [2024-07-25 11:35:56.969095] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:41.397 [2024-07-25 11:35:57.160565] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:41.397 [2024-07-25 11:35:57.161514] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:41.655 [2024-07-25 11:35:57.373761] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:24:41.655 [2024-07-25 11:35:57.374029] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.655 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.913 [2024-07-25 11:35:57.638109] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:24:41.913 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:41.913 "name": "raid_bdev1", 00:24:41.913 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:41.913 "strip_size_kb": 0, 00:24:41.913 "state": "online", 00:24:41.913 "raid_level": "raid1", 00:24:41.913 "superblock": true, 00:24:41.914 "num_base_bdevs": 4, 00:24:41.914 "num_base_bdevs_discovered": 3, 00:24:41.914 "num_base_bdevs_operational": 3, 00:24:41.914 "process": { 00:24:41.914 "type": "rebuild", 00:24:41.914 "target": "spare", 00:24:41.914 "progress": { 00:24:41.914 "blocks": 26624, 00:24:41.914 "percent": 41 00:24:41.914 } 00:24:41.914 }, 00:24:41.914 "base_bdevs_list": [ 00:24:41.914 { 00:24:41.914 "name": "spare", 00:24:41.914 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:41.914 "is_configured": true, 00:24:41.914 "data_offset": 2048, 00:24:41.914 "data_size": 63488 00:24:41.914 }, 00:24:41.914 { 00:24:41.914 "name": null, 00:24:41.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.914 "is_configured": false, 00:24:41.914 "data_offset": 2048, 00:24:41.914 "data_size": 63488 00:24:41.914 }, 00:24:41.914 { 00:24:41.914 "name": "BaseBdev3", 00:24:41.914 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:41.914 "is_configured": true, 00:24:41.914 "data_offset": 2048, 00:24:41.914 "data_size": 63488 00:24:41.914 }, 00:24:41.914 { 00:24:41.914 "name": "BaseBdev4", 00:24:41.914 "uuid": 
"4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:41.914 "is_configured": true, 00:24:41.914 "data_offset": 2048, 00:24:41.914 "data_size": 63488 00:24:41.914 } 00:24:41.914 ] 00:24:41.914 }' 00:24:41.914 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:41.914 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.914 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=1121 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.171 11:35:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.430 [2024-07-25 11:35:58.103956] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:42.430 "name": "raid_bdev1", 00:24:42.430 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:42.430 "strip_size_kb": 0, 00:24:42.430 "state": "online", 00:24:42.430 "raid_level": "raid1", 00:24:42.430 "superblock": true, 00:24:42.430 "num_base_bdevs": 4, 00:24:42.430 "num_base_bdevs_discovered": 3, 00:24:42.430 "num_base_bdevs_operational": 3, 00:24:42.430 "process": { 00:24:42.430 "type": "rebuild", 00:24:42.430 "target": "spare", 00:24:42.430 "progress": { 00:24:42.430 "blocks": 32768, 00:24:42.430 "percent": 51 00:24:42.430 } 00:24:42.430 }, 00:24:42.430 "base_bdevs_list": [ 00:24:42.430 { 00:24:42.430 "name": "spare", 00:24:42.430 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:42.430 "is_configured": true, 00:24:42.430 "data_offset": 2048, 00:24:42.430 "data_size": 63488 00:24:42.430 }, 00:24:42.430 { 00:24:42.430 "name": null, 00:24:42.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.430 "is_configured": false, 00:24:42.430 "data_offset": 2048, 00:24:42.430 "data_size": 63488 00:24:42.430 }, 00:24:42.430 { 00:24:42.430 "name": "BaseBdev3", 00:24:42.430 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:42.430 "is_configured": true, 00:24:42.430 "data_offset": 2048, 00:24:42.430 "data_size": 63488 00:24:42.430 }, 00:24:42.430 { 00:24:42.430 "name": "BaseBdev4", 00:24:42.430 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:42.430 "is_configured": true, 00:24:42.430 "data_offset": 2048, 00:24:42.430 "data_size": 63488 00:24:42.430 } 00:24:42.430 ] 00:24:42.430 }' 
00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:42.430 [2024-07-25 11:35:58.213996] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.430 11:35:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:43.803 [2024-07-25 11:35:59.255011] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:43.803 [2024-07-25 11:35:59.256230] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.803 [2024-07-25 11:35:59.468366] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:43.803 "name": "raid_bdev1", 00:24:43.803 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:43.803 "strip_size_kb": 0, 00:24:43.803 "state": "online", 00:24:43.803 "raid_level": "raid1", 00:24:43.803 "superblock": true, 00:24:43.803 "num_base_bdevs": 4, 00:24:43.803 "num_base_bdevs_discovered": 3, 00:24:43.803 "num_base_bdevs_operational": 3, 00:24:43.803 "process": { 00:24:43.803 "type": "rebuild", 00:24:43.803 "target": "spare", 00:24:43.803 "progress": { 00:24:43.803 "blocks": 53248, 00:24:43.803 "percent": 83 00:24:43.803 } 00:24:43.803 }, 00:24:43.803 "base_bdevs_list": [ 00:24:43.803 { 00:24:43.803 "name": "spare", 00:24:43.803 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:43.803 "is_configured": true, 00:24:43.803 "data_offset": 2048, 00:24:43.803 "data_size": 63488 00:24:43.803 }, 00:24:43.803 { 00:24:43.803 "name": null, 00:24:43.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.803 "is_configured": false, 00:24:43.803 "data_offset": 2048, 00:24:43.803 "data_size": 63488 00:24:43.803 }, 00:24:43.803 { 00:24:43.803 "name": "BaseBdev3", 00:24:43.803 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:43.803 "is_configured": true, 00:24:43.803 "data_offset": 2048, 00:24:43.803 
"data_size": 63488 00:24:43.803 }, 00:24:43.803 { 00:24:43.803 "name": "BaseBdev4", 00:24:43.803 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:43.803 "is_configured": true, 00:24:43.803 "data_offset": 2048, 00:24:43.803 "data_size": 63488 00:24:43.803 } 00:24:43.803 ] 00:24:43.803 }' 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.803 11:35:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:44.370 [2024-07-25 11:36:00.134967] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:44.370 [2024-07-25 11:36:00.242792] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:44.370 [2024-07-25 11:36:00.247515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.981 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.239 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.239 "name": "raid_bdev1", 00:24:45.239 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:45.239 "strip_size_kb": 0, 00:24:45.239 "state": "online", 00:24:45.239 "raid_level": "raid1", 00:24:45.239 "superblock": true, 00:24:45.239 "num_base_bdevs": 4, 00:24:45.239 "num_base_bdevs_discovered": 3, 00:24:45.239 "num_base_bdevs_operational": 3, 00:24:45.239 "base_bdevs_list": [ 00:24:45.239 { 00:24:45.239 "name": "spare", 00:24:45.239 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:45.239 "is_configured": true, 00:24:45.239 "data_offset": 2048, 00:24:45.239 "data_size": 63488 00:24:45.239 }, 00:24:45.239 { 00:24:45.239 "name": null, 00:24:45.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.239 "is_configured": false, 00:24:45.239 "data_offset": 2048, 00:24:45.239 "data_size": 63488 00:24:45.239 }, 00:24:45.239 { 00:24:45.239 "name": "BaseBdev3", 00:24:45.239 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:45.239 "is_configured": true, 00:24:45.239 "data_offset": 2048, 00:24:45.239 "data_size": 63488 00:24:45.239 }, 00:24:45.239 { 00:24:45.239 "name": "BaseBdev4", 00:24:45.239 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:45.239 "is_configured": true, 
00:24:45.239 "data_offset": 2048, 00:24:45.239 "data_size": 63488 00:24:45.239 } 00:24:45.239 ] 00:24:45.239 }' 00:24:45.239 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.239 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:45.239 11:36:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.239 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.497 "name": "raid_bdev1", 00:24:45.497 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:45.497 "strip_size_kb": 0, 00:24:45.497 "state": "online", 00:24:45.497 "raid_level": "raid1", 00:24:45.497 "superblock": true, 00:24:45.497 "num_base_bdevs": 4, 00:24:45.497 "num_base_bdevs_discovered": 3, 00:24:45.497 "num_base_bdevs_operational": 3, 00:24:45.497 "base_bdevs_list": [ 00:24:45.497 { 00:24:45.497 "name": "spare", 00:24:45.497 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:45.497 "is_configured": true, 00:24:45.497 "data_offset": 2048, 00:24:45.497 "data_size": 63488 00:24:45.497 }, 00:24:45.497 { 00:24:45.497 "name": null, 00:24:45.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.497 "is_configured": false, 00:24:45.497 "data_offset": 2048, 00:24:45.497 "data_size": 63488 00:24:45.497 }, 00:24:45.497 { 00:24:45.497 "name": "BaseBdev3", 00:24:45.497 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:45.497 "is_configured": true, 00:24:45.497 "data_offset": 2048, 00:24:45.497 "data_size": 63488 00:24:45.497 }, 00:24:45.497 { 00:24:45.497 "name": "BaseBdev4", 00:24:45.497 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:45.497 "is_configured": true, 00:24:45.497 "data_offset": 2048, 00:24:45.497 "data_size": 63488 00:24:45.497 } 00:24:45.497 ] 00:24:45.497 }' 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 3 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:45.497 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:45.498 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:45.498 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.498 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.498 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.498 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.756 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.756 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.015 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.015 "name": "raid_bdev1", 00:24:46.015 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:46.015 "strip_size_kb": 0, 00:24:46.015 "state": "online", 00:24:46.015 "raid_level": "raid1", 00:24:46.015 "superblock": true, 00:24:46.015 "num_base_bdevs": 4, 00:24:46.015 "num_base_bdevs_discovered": 3, 00:24:46.015 "num_base_bdevs_operational": 3, 00:24:46.015 "base_bdevs_list": [ 00:24:46.015 { 00:24:46.015 "name": "spare", 00:24:46.015 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:46.015 "is_configured": true, 00:24:46.015 "data_offset": 2048, 00:24:46.015 "data_size": 63488 00:24:46.015 }, 00:24:46.015 { 00:24:46.015 "name": null, 00:24:46.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.015 "is_configured": false, 00:24:46.015 "data_offset": 2048, 00:24:46.015 "data_size": 63488 00:24:46.015 }, 00:24:46.015 { 00:24:46.015 "name": "BaseBdev3", 00:24:46.015 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:46.015 "is_configured": true, 00:24:46.015 "data_offset": 2048, 00:24:46.015 "data_size": 63488 00:24:46.015 }, 00:24:46.015 { 00:24:46.015 "name": "BaseBdev4", 00:24:46.015 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:46.015 "is_configured": true, 00:24:46.015 "data_offset": 2048, 00:24:46.015 "data_size": 63488 00:24:46.015 } 00:24:46.015 ] 00:24:46.015 }' 00:24:46.015 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.015 11:36:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:46.581 11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:46.839 [2024-07-25 11:36:02.582644] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.839 [2024-07-25 11:36:02.582894] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.839 00:24:46.839 Latency(us) 00:24:46.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.840 Job: raid_bdev1 (Core Mask 0x1, 
workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:46.840 raid_bdev1 : 12.34 88.95 266.85 0.00 0.00 15894.97 307.20 122969.37 00:24:46.840 =================================================================================================================== 00:24:46.840 Total : 88.95 266.85 0.00 0.00 15894.97 307.20 122969.37 00:24:46.840 [2024-07-25 11:36:02.698869] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:46.840 0 00:24:46.840 [2024-07-25 11:36:02.699217] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.840 [2024-07-25 11:36:02.699382] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.840 [2024-07-25 11:36:02.699402] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:47.098 11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.098 11:36:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.356 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:47.356 /dev/nbd0 00:24:47.614 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:47.614 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:47.615 11:36:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.615 1+0 records in 00:24:47.615 1+0 records out 00:24:47.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289926 s, 14.1 MB/s 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.615 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:47.873 /dev/nbd1 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:47.873 1+0 records in 00:24:47.873 1+0 records out 00:24:47.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359291 s, 11.4 MB/s 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:47.873 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.131 11:36:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:48.390 11:36:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:48.390 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:48.649 /dev/nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.649 1+0 records in 00:24:48.649 1+0 records out 00:24:48.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370951 s, 11.0 MB/s 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:48.649 11:36:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:48.649 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:48.650 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.650 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:24:48.908 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.909 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # 
return 0 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:24:49.167 11:36:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:49.425 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:49.683 [2024-07-25 11:36:05.510182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:49.683 [2024-07-25 11:36:05.510263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.683 [2024-07-25 11:36:05.510303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:49.683 [2024-07-25 11:36:05.510320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.683 [2024-07-25 11:36:05.513223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.683 [2024-07-25 11:36:05.513274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:49.683 [2024-07-25 11:36:05.513410] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:49.683 [2024-07-25 11:36:05.513480] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:49.683 [2024-07-25 11:36:05.513710] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:49.683 [2024-07-25 11:36:05.513858] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:49.683 spare 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.683 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.941 [2024-07-25 11:36:05.614009] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:49.941 [2024-07-25 11:36:05.614282] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:49.941 [2024-07-25 11:36:05.614812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000037160 00:24:49.941 [2024-07-25 11:36:05.615226] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:49.941 [2024-07-25 11:36:05.615263] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:49.941 [2024-07-25 11:36:05.615499] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.199 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.199 "name": "raid_bdev1", 00:24:50.199 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:50.199 "strip_size_kb": 0, 00:24:50.199 "state": "online", 00:24:50.199 "raid_level": "raid1", 00:24:50.199 "superblock": true, 00:24:50.199 "num_base_bdevs": 4, 00:24:50.199 "num_base_bdevs_discovered": 3, 00:24:50.199 "num_base_bdevs_operational": 3, 00:24:50.199 "base_bdevs_list": [ 00:24:50.199 { 00:24:50.199 "name": "spare", 00:24:50.199 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:50.199 "is_configured": true, 00:24:50.199 "data_offset": 2048, 00:24:50.199 "data_size": 63488 00:24:50.199 }, 00:24:50.199 { 00:24:50.199 "name": null, 00:24:50.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.199 "is_configured": false, 00:24:50.199 "data_offset": 2048, 00:24:50.199 "data_size": 63488 00:24:50.199 }, 00:24:50.199 { 00:24:50.199 "name": "BaseBdev3", 00:24:50.199 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:50.199 "is_configured": true, 00:24:50.199 "data_offset": 2048, 00:24:50.199 "data_size": 63488 00:24:50.199 }, 00:24:50.199 { 00:24:50.199 "name": "BaseBdev4", 00:24:50.199 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:50.199 "is_configured": true, 00:24:50.199 "data_offset": 2048, 00:24:50.199 "data_size": 63488 00:24:50.199 } 00:24:50.199 ] 00:24:50.199 }' 00:24:50.199 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.199 11:36:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.772 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.030 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:51.030 "name": "raid_bdev1", 00:24:51.030 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:51.030 "strip_size_kb": 0, 00:24:51.030 "state": "online", 00:24:51.030 "raid_level": "raid1", 00:24:51.030 "superblock": true, 00:24:51.030 "num_base_bdevs": 4, 00:24:51.030 "num_base_bdevs_discovered": 3, 00:24:51.030 "num_base_bdevs_operational": 3, 00:24:51.030 "base_bdevs_list": [ 00:24:51.030 { 00:24:51.030 "name": "spare", 00:24:51.030 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:51.030 "is_configured": 
true, 00:24:51.030 "data_offset": 2048, 00:24:51.030 "data_size": 63488 00:24:51.030 }, 00:24:51.030 { 00:24:51.030 "name": null, 00:24:51.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.030 "is_configured": false, 00:24:51.030 "data_offset": 2048, 00:24:51.030 "data_size": 63488 00:24:51.030 }, 00:24:51.030 { 00:24:51.030 "name": "BaseBdev3", 00:24:51.030 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:51.030 "is_configured": true, 00:24:51.030 "data_offset": 2048, 00:24:51.030 "data_size": 63488 00:24:51.030 }, 00:24:51.030 { 00:24:51.030 "name": "BaseBdev4", 00:24:51.030 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:51.030 "is_configured": true, 00:24:51.030 "data_offset": 2048, 00:24:51.030 "data_size": 63488 00:24:51.030 } 00:24:51.030 ] 00:24:51.030 }' 00:24:51.030 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:51.030 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:51.030 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:51.288 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:51.288 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.288 11:36:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:51.546 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:24:51.546 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:51.804 [2024-07-25 11:36:07.444329] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.804 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.062 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.062 "name": "raid_bdev1", 00:24:52.062 
"uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:52.062 "strip_size_kb": 0, 00:24:52.062 "state": "online", 00:24:52.062 "raid_level": "raid1", 00:24:52.062 "superblock": true, 00:24:52.062 "num_base_bdevs": 4, 00:24:52.062 "num_base_bdevs_discovered": 2, 00:24:52.062 "num_base_bdevs_operational": 2, 00:24:52.062 "base_bdevs_list": [ 00:24:52.062 { 00:24:52.062 "name": null, 00:24:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.062 "is_configured": false, 00:24:52.062 "data_offset": 2048, 00:24:52.062 "data_size": 63488 00:24:52.062 }, 00:24:52.062 { 00:24:52.062 "name": null, 00:24:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.062 "is_configured": false, 00:24:52.062 "data_offset": 2048, 00:24:52.062 "data_size": 63488 00:24:52.062 }, 00:24:52.062 { 00:24:52.062 "name": "BaseBdev3", 00:24:52.062 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:52.062 "is_configured": true, 00:24:52.062 "data_offset": 2048, 00:24:52.062 "data_size": 63488 00:24:52.062 }, 00:24:52.062 { 00:24:52.062 "name": "BaseBdev4", 00:24:52.062 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:52.062 "is_configured": true, 00:24:52.062 "data_offset": 2048, 00:24:52.062 "data_size": 63488 00:24:52.062 } 00:24:52.062 ] 00:24:52.062 }' 00:24:52.062 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.062 11:36:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:52.627 11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:52.885 [2024-07-25 11:36:08.628902] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.885 [2024-07-25 11:36:08.629273] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:52.885 [2024-07-25 11:36:08.629322] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:52.885 [2024-07-25 11:36:08.629403] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:52.885 [2024-07-25 11:36:08.647524] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:24:52.885 [2024-07-25 11:36:08.649985] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:52.885 11:36:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.817 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.076 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:54.076 "name": "raid_bdev1", 00:24:54.076 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:54.076 "strip_size_kb": 0, 00:24:54.076 "state": "online", 00:24:54.076 "raid_level": "raid1", 00:24:54.076 "superblock": true, 00:24:54.076 "num_base_bdevs": 4, 00:24:54.076 "num_base_bdevs_discovered": 3, 00:24:54.076 "num_base_bdevs_operational": 3, 00:24:54.076 "process": { 00:24:54.076 "type": "rebuild", 00:24:54.076 "target": "spare", 00:24:54.076 "progress": { 00:24:54.076 "blocks": 24576, 00:24:54.076 "percent": 38 00:24:54.076 } 00:24:54.076 }, 00:24:54.076 "base_bdevs_list": [ 00:24:54.076 { 00:24:54.076 "name": "spare", 00:24:54.076 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:54.076 "is_configured": true, 00:24:54.076 "data_offset": 2048, 00:24:54.076 "data_size": 63488 00:24:54.076 }, 00:24:54.076 { 00:24:54.076 "name": null, 00:24:54.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.076 "is_configured": false, 00:24:54.076 "data_offset": 2048, 00:24:54.076 "data_size": 63488 00:24:54.076 }, 00:24:54.076 { 00:24:54.076 "name": "BaseBdev3", 00:24:54.076 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:54.076 "is_configured": true, 00:24:54.076 "data_offset": 2048, 00:24:54.076 "data_size": 63488 00:24:54.076 }, 00:24:54.076 { 00:24:54.076 "name": "BaseBdev4", 00:24:54.076 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:54.076 "is_configured": true, 00:24:54.076 "data_offset": 2048, 00:24:54.076 "data_size": 63488 00:24:54.076 } 00:24:54.076 ] 00:24:54.076 }' 00:24:54.076 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:54.334 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:54.334 11:36:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:54.334 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:54.334 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:54.592 [2024-07-25 11:36:10.288207] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.592 [2024-07-25 11:36:10.362835] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:54.592 [2024-07-25 11:36:10.362942] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.592 [2024-07-25 11:36:10.362973] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:54.592 [2024-07-25 11:36:10.362986] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.592 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.850 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:54.850 "name": "raid_bdev1", 00:24:54.850 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:54.850 "strip_size_kb": 0, 00:24:54.850 "state": "online", 00:24:54.850 "raid_level": "raid1", 00:24:54.850 "superblock": true, 00:24:54.850 "num_base_bdevs": 4, 00:24:54.850 "num_base_bdevs_discovered": 2, 00:24:54.850 "num_base_bdevs_operational": 2, 00:24:54.850 "base_bdevs_list": [ 00:24:54.850 { 00:24:54.850 "name": null, 00:24:54.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.850 "is_configured": false, 00:24:54.850 "data_offset": 2048, 00:24:54.850 "data_size": 63488 00:24:54.850 }, 00:24:54.850 { 00:24:54.850 "name": null, 00:24:54.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.850 "is_configured": false, 00:24:54.850 "data_offset": 2048, 00:24:54.850 "data_size": 63488 00:24:54.850 }, 00:24:54.850 { 00:24:54.850 "name": "BaseBdev3", 00:24:54.850 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:54.850 "is_configured": true, 00:24:54.850 "data_offset": 2048, 00:24:54.850 "data_size": 63488 00:24:54.850 }, 00:24:54.850 { 00:24:54.850 "name": "BaseBdev4", 00:24:54.850 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:54.850 "is_configured": true, 00:24:54.850 "data_offset": 2048, 00:24:54.850 "data_size": 63488 
00:24:54.850 } 00:24:54.850 ] 00:24:54.850 }' 00:24:54.850 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:54.850 11:36:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:55.785 11:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:55.785 [2024-07-25 11:36:11.561539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:55.785 [2024-07-25 11:36:11.561652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.785 [2024-07-25 11:36:11.561703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:55.785 [2024-07-25 11:36:11.561721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.785 [2024-07-25 11:36:11.562324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.785 [2024-07-25 11:36:11.562357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:55.785 [2024-07-25 11:36:11.562485] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:55.785 [2024-07-25 11:36:11.562507] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:55.785 [2024-07-25 11:36:11.562525] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:55.785 [2024-07-25 11:36:11.562555] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:55.785 [2024-07-25 11:36:11.575206] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:24:55.785 spare 00:24:55.785 [2024-07-25 11:36:11.577648] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:55.785 11:36:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.721 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.288 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.288 "name": "raid_bdev1", 00:24:57.288 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:57.288 "strip_size_kb": 0, 00:24:57.288 "state": "online", 00:24:57.288 "raid_level": "raid1", 00:24:57.288 "superblock": true, 00:24:57.288 "num_base_bdevs": 4, 00:24:57.288 "num_base_bdevs_discovered": 3, 00:24:57.288 "num_base_bdevs_operational": 3, 00:24:57.288 "process": { 00:24:57.288 "type": "rebuild", 00:24:57.288 "target": 
"spare", 00:24:57.288 "progress": { 00:24:57.288 "blocks": 24576, 00:24:57.288 "percent": 38 00:24:57.288 } 00:24:57.288 }, 00:24:57.288 "base_bdevs_list": [ 00:24:57.288 { 00:24:57.288 "name": "spare", 00:24:57.288 "uuid": "3c185377-5084-5335-94d2-d29b3342114f", 00:24:57.288 "is_configured": true, 00:24:57.288 "data_offset": 2048, 00:24:57.288 "data_size": 63488 00:24:57.288 }, 00:24:57.288 { 00:24:57.288 "name": null, 00:24:57.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.288 "is_configured": false, 00:24:57.288 "data_offset": 2048, 00:24:57.288 "data_size": 63488 00:24:57.288 }, 00:24:57.288 { 00:24:57.288 "name": "BaseBdev3", 00:24:57.288 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:57.288 "is_configured": true, 00:24:57.289 "data_offset": 2048, 00:24:57.289 "data_size": 63488 00:24:57.289 }, 00:24:57.289 { 00:24:57.289 "name": "BaseBdev4", 00:24:57.289 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:57.289 "is_configured": true, 00:24:57.289 "data_offset": 2048, 00:24:57.289 "data_size": 63488 00:24:57.289 } 00:24:57.289 ] 00:24:57.289 }' 00:24:57.289 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.289 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:57.289 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.289 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.289 11:36:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:57.547 [2024-07-25 11:36:13.236575] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.547 [2024-07-25 11:36:13.291099] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:57.547 [2024-07-25 11:36:13.291464] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.547 [2024-07-25 11:36:13.291648] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:57.547 [2024-07-25 11:36:13.291792] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.547 11:36:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.547 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.805 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.805 "name": "raid_bdev1", 00:24:57.805 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:57.805 "strip_size_kb": 0, 00:24:57.805 "state": "online", 00:24:57.805 "raid_level": "raid1", 00:24:57.805 "superblock": true, 00:24:57.805 "num_base_bdevs": 4, 00:24:57.805 "num_base_bdevs_discovered": 2, 00:24:57.805 "num_base_bdevs_operational": 2, 00:24:57.805 "base_bdevs_list": [ 00:24:57.805 { 00:24:57.805 "name": null, 00:24:57.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.805 "is_configured": false, 00:24:57.805 "data_offset": 2048, 00:24:57.805 "data_size": 63488 00:24:57.805 }, 00:24:57.805 { 00:24:57.805 "name": null, 00:24:57.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.805 "is_configured": false, 00:24:57.805 "data_offset": 2048, 00:24:57.805 "data_size": 63488 00:24:57.805 }, 00:24:57.805 { 00:24:57.805 "name": "BaseBdev3", 00:24:57.805 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:57.805 "is_configured": true, 00:24:57.805 "data_offset": 2048, 00:24:57.805 "data_size": 63488 00:24:57.805 }, 00:24:57.805 { 00:24:57.805 "name": "BaseBdev4", 00:24:57.805 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:57.805 "is_configured": true, 00:24:57.805 "data_offset": 2048, 00:24:57.805 "data_size": 63488 00:24:57.805 } 00:24:57.805 ] 00:24:57.805 }' 00:24:57.805 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.805 11:36:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.740 "name": "raid_bdev1", 00:24:58.740 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:24:58.740 "strip_size_kb": 0, 00:24:58.740 "state": "online", 00:24:58.740 "raid_level": "raid1", 00:24:58.740 "superblock": true, 00:24:58.740 "num_base_bdevs": 4, 00:24:58.740 "num_base_bdevs_discovered": 2, 00:24:58.740 "num_base_bdevs_operational": 2, 00:24:58.740 "base_bdevs_list": [ 00:24:58.740 { 00:24:58.740 "name": null, 00:24:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.740 "is_configured": false, 00:24:58.740 "data_offset": 2048, 00:24:58.740 "data_size": 63488 00:24:58.740 }, 00:24:58.740 { 00:24:58.740 "name": null, 
00:24:58.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.740 "is_configured": false, 00:24:58.740 "data_offset": 2048, 00:24:58.740 "data_size": 63488 00:24:58.740 }, 00:24:58.740 { 00:24:58.740 "name": "BaseBdev3", 00:24:58.740 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:24:58.740 "is_configured": true, 00:24:58.740 "data_offset": 2048, 00:24:58.740 "data_size": 63488 00:24:58.740 }, 00:24:58.740 { 00:24:58.740 "name": "BaseBdev4", 00:24:58.740 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:24:58.740 "is_configured": true, 00:24:58.740 "data_offset": 2048, 00:24:58.740 "data_size": 63488 00:24:58.740 } 00:24:58.740 ] 00:24:58.740 }' 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:58.740 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:58.999 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:58.999 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:58.999 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:59.257 11:36:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:59.516 [2024-07-25 11:36:15.171598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:59.516 [2024-07-25 11:36:15.171895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.516 [2024-07-25 11:36:15.171939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:24:59.516 [2024-07-25 11:36:15.171960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.516 [2024-07-25 11:36:15.172509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.516 [2024-07-25 11:36:15.172542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:59.516 [2024-07-25 11:36:15.172690] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:59.516 [2024-07-25 11:36:15.172721] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:24:59.516 [2024-07-25 11:36:15.172735] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:59.516 BaseBdev1 00:24:59.516 11:36:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:00.449 
11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.449 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.707 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.707 "name": "raid_bdev1", 00:25:00.707 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:25:00.707 "strip_size_kb": 0, 00:25:00.707 "state": "online", 00:25:00.707 "raid_level": "raid1", 00:25:00.707 "superblock": true, 00:25:00.707 "num_base_bdevs": 4, 00:25:00.707 "num_base_bdevs_discovered": 2, 00:25:00.707 "num_base_bdevs_operational": 2, 00:25:00.707 "base_bdevs_list": [ 00:25:00.707 { 00:25:00.707 "name": null, 00:25:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.707 "is_configured": false, 00:25:00.707 "data_offset": 2048, 00:25:00.707 "data_size": 63488 00:25:00.707 }, 00:25:00.707 { 00:25:00.707 "name": null, 00:25:00.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.707 "is_configured": false, 00:25:00.707 "data_offset": 2048, 00:25:00.707 "data_size": 63488 00:25:00.707 }, 00:25:00.707 { 00:25:00.707 "name": "BaseBdev3", 00:25:00.707 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:25:00.707 "is_configured": true, 00:25:00.707 "data_offset": 2048, 00:25:00.707 "data_size": 63488 00:25:00.707 }, 00:25:00.707 { 00:25:00.707 "name": "BaseBdev4", 00:25:00.707 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:25:00.707 "is_configured": true, 00:25:00.707 "data_offset": 2048, 00:25:00.707 "data_size": 63488 00:25:00.707 } 00:25:00.707 ] 00:25:00.707 }' 00:25:00.707 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.707 11:36:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:01.643 "name": "raid_bdev1", 00:25:01.643 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:25:01.643 "strip_size_kb": 0, 00:25:01.643 "state": "online", 00:25:01.643 "raid_level": "raid1", 00:25:01.643 
"superblock": true, 00:25:01.643 "num_base_bdevs": 4, 00:25:01.643 "num_base_bdevs_discovered": 2, 00:25:01.643 "num_base_bdevs_operational": 2, 00:25:01.643 "base_bdevs_list": [ 00:25:01.643 { 00:25:01.643 "name": null, 00:25:01.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.643 "is_configured": false, 00:25:01.643 "data_offset": 2048, 00:25:01.643 "data_size": 63488 00:25:01.643 }, 00:25:01.643 { 00:25:01.643 "name": null, 00:25:01.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.643 "is_configured": false, 00:25:01.643 "data_offset": 2048, 00:25:01.643 "data_size": 63488 00:25:01.643 }, 00:25:01.643 { 00:25:01.643 "name": "BaseBdev3", 00:25:01.643 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:25:01.643 "is_configured": true, 00:25:01.643 "data_offset": 2048, 00:25:01.643 "data_size": 63488 00:25:01.643 }, 00:25:01.643 { 00:25:01.643 "name": "BaseBdev4", 00:25:01.643 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:25:01.643 "is_configured": true, 00:25:01.643 "data_offset": 2048, 00:25:01.643 "data_size": 63488 00:25:01.643 } 00:25:01.643 ] 00:25:01.643 }' 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:01.643 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:01.901 [2024-07-25 11:36:17.752665] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:01.901 
[2024-07-25 11:36:17.752872] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:01.901 [2024-07-25 11:36:17.752902] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:01.901 request: 00:25:01.901 { 00:25:01.901 "base_bdev": "BaseBdev1", 00:25:01.901 "raid_bdev": "raid_bdev1", 00:25:01.901 "method": "bdev_raid_add_base_bdev", 00:25:01.901 "req_id": 1 00:25:01.901 } 00:25:01.901 Got JSON-RPC error response 00:25:01.901 response: 00:25:01.901 { 00:25:01.901 "code": -22, 00:25:01.901 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:01.901 } 00:25:01.901 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:25:01.901 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:01.901 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:01.901 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:01.901 11:36:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.276 11:36:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.276 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.276 "name": "raid_bdev1", 00:25:03.276 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:25:03.276 "strip_size_kb": 0, 00:25:03.276 "state": "online", 00:25:03.276 "raid_level": "raid1", 00:25:03.276 "superblock": true, 00:25:03.276 "num_base_bdevs": 4, 00:25:03.276 "num_base_bdevs_discovered": 2, 00:25:03.276 "num_base_bdevs_operational": 2, 00:25:03.276 "base_bdevs_list": [ 00:25:03.276 { 00:25:03.276 "name": null, 00:25:03.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.276 "is_configured": false, 00:25:03.276 "data_offset": 2048, 00:25:03.276 "data_size": 63488 00:25:03.276 }, 00:25:03.276 { 00:25:03.276 "name": null, 00:25:03.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.276 "is_configured": false, 00:25:03.276 
"data_offset": 2048, 00:25:03.276 "data_size": 63488 00:25:03.276 }, 00:25:03.276 { 00:25:03.276 "name": "BaseBdev3", 00:25:03.276 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:25:03.276 "is_configured": true, 00:25:03.276 "data_offset": 2048, 00:25:03.276 "data_size": 63488 00:25:03.276 }, 00:25:03.276 { 00:25:03.276 "name": "BaseBdev4", 00:25:03.276 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:25:03.276 "is_configured": true, 00:25:03.276 "data_offset": 2048, 00:25:03.276 "data_size": 63488 00:25:03.276 } 00:25:03.276 ] 00:25:03.276 }' 00:25:03.276 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.276 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.842 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.106 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.106 "name": "raid_bdev1", 00:25:04.106 "uuid": "23d8e0da-d089-42e3-bba1-3c494de7f961", 00:25:04.106 "strip_size_kb": 0, 00:25:04.106 "state": "online", 00:25:04.106 "raid_level": "raid1", 00:25:04.106 "superblock": true, 00:25:04.106 "num_base_bdevs": 4, 00:25:04.106 "num_base_bdevs_discovered": 2, 00:25:04.107 "num_base_bdevs_operational": 2, 00:25:04.107 "base_bdevs_list": [ 00:25:04.107 { 00:25:04.107 "name": null, 00:25:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.107 "is_configured": false, 00:25:04.107 "data_offset": 2048, 00:25:04.107 "data_size": 63488 00:25:04.107 }, 00:25:04.107 { 00:25:04.107 "name": null, 00:25:04.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.107 "is_configured": false, 00:25:04.107 "data_offset": 2048, 00:25:04.107 "data_size": 63488 00:25:04.107 }, 00:25:04.107 { 00:25:04.107 "name": "BaseBdev3", 00:25:04.107 "uuid": "4615b1ed-61cd-57af-995d-762899a0dc0a", 00:25:04.107 "is_configured": true, 00:25:04.107 "data_offset": 2048, 00:25:04.107 "data_size": 63488 00:25:04.107 }, 00:25:04.107 { 00:25:04.107 "name": "BaseBdev4", 00:25:04.107 "uuid": "4f3d6faf-7e61-5d1a-bcf4-b6cba9bee99e", 00:25:04.107 "is_configured": true, 00:25:04.107 "data_offset": 2048, 00:25:04.107 "data_size": 63488 00:25:04.107 } 00:25:04.107 ] 00:25:04.107 }' 00:25:04.107 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:04.107 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:04.107 11:36:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:04.376 11:36:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 90505 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 90505 ']' 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90505 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90505 00:25:04.376 killing process with pid 90505 00:25:04.376 Received shutdown signal, test time was about 29.715057 seconds 00:25:04.376 00:25:04.376 Latency(us) 00:25:04.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.376 =================================================================================================================== 00:25:04.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90505' 00:25:04.376 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90505 00:25:04.376 [2024-07-25 11:36:20.049502] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:04.377 11:36:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90505 00:25:04.377 [2024-07-25 11:36:20.049708] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.377 [2024-07-25 11:36:20.049821] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.377 [2024-07-25 11:36:20.049842] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:04.634 [2024-07-25 11:36:20.429723] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:06.007 ************************************ 00:25:06.007 END TEST raid_rebuild_test_sb_io 00:25:06.007 ************************************ 00:25:06.007 11:36:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:25:06.007 00:25:06.007 real 0m37.294s 00:25:06.007 user 0m59.865s 00:25:06.007 sys 0m4.223s 00:25:06.007 11:36:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.007 11:36:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:06.007 11:36:21 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:25:06.007 11:36:21 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:25:06.007 11:36:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:06.007 11:36:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.007 11:36:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:06.007 ************************************ 00:25:06.007 START TEST raid5f_state_function_test 00:25:06.007 ************************************ 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # 
raid_state_function_test raid5f 3 false 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:06.007 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:06.008 Process raid pid: 91388 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=91388 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 91388' 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 91388 /var/tmp/spdk-raid.sock 00:25:06.008 11:36:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 91388 ']' 00:25:06.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.008 11:36:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.008 [2024-07-25 11:36:21.823015] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:25:06.008 [2024-07-25 11:36:21.823182] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.266 [2024-07-25 11:36:21.995573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.524 [2024-07-25 11:36:22.233087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.782 [2024-07-25 11:36:22.437516] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:06.782 [2024-07-25 11:36:22.437574] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:07.040 11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.040 11:36:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:07.040 11:36:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:07.298 [2024-07-25 11:36:23.011104] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.298 [2024-07-25 11:36:23.011418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.298 [2024-07-25 11:36:23.011451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.298 [2024-07-25 11:36:23.011468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.298 [2024-07-25 11:36:23.011486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:07.298 [2024-07-25 11:36:23.011498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.298 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:07.298 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.298 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.298 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.299 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.637 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:07.637 "name": "Existed_Raid", 00:25:07.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.637 "strip_size_kb": 64, 00:25:07.637 "state": "configuring", 00:25:07.637 "raid_level": "raid5f", 00:25:07.637 "superblock": false, 00:25:07.637 "num_base_bdevs": 3, 00:25:07.637 "num_base_bdevs_discovered": 0, 00:25:07.637 "num_base_bdevs_operational": 3, 00:25:07.637 "base_bdevs_list": [ 00:25:07.637 { 00:25:07.637 "name": "BaseBdev1", 00:25:07.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.637 "is_configured": false, 00:25:07.637 "data_offset": 0, 00:25:07.637 "data_size": 0 00:25:07.637 }, 00:25:07.637 { 00:25:07.637 "name": "BaseBdev2", 00:25:07.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.637 "is_configured": false, 00:25:07.637 "data_offset": 0, 00:25:07.637 "data_size": 0 00:25:07.637 }, 00:25:07.637 { 00:25:07.637 "name": "BaseBdev3", 00:25:07.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.637 "is_configured": false, 00:25:07.637 "data_offset": 0, 00:25:07.637 "data_size": 0 00:25:07.637 } 00:25:07.637 ] 00:25:07.637 }' 00:25:07.637 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:07.637 11:36:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 11:36:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:08.460 [2024-07-25 11:36:24.167227] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:08.461 [2024-07-25 11:36:24.167277] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:08.461 11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:08.719 [2024-07-25 11:36:24.459337] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:08.719 [2024-07-25 11:36:24.459403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:08.719 [2024-07-25 11:36:24.459427] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:08.719 [2024-07-25 11:36:24.459441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
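As the NOTICE/DEBUG lines above show, bdev_raid_create accepts base bdev names that do not exist yet: the raid bdev is registered immediately and sits in the "configuring" state until every listed base bdev appears and is claimed. A short sketch of that call and the follow-up state query, illustrative only and built from the exact RPC invocations in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Create a 3-disk raid5f volume with a 64 KiB strip; none of the members exist yet.
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# Until the base bdevs are registered the volume reports state "configuring".
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'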
00:25:08.719 [2024-07-25 11:36:24.459454] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:08.719 [2024-07-25 11:36:24.459465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:08.719 11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:08.977 [2024-07-25 11:36:24.780005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.977 BaseBdev1 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:08.977 11:36:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.236 11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:09.495 [ 00:25:09.495 { 00:25:09.495 "name": "BaseBdev1", 00:25:09.495 "aliases": [ 00:25:09.495 "80f43456-35c4-4232-b76e-32d14a3e75e4" 00:25:09.495 ], 00:25:09.495 "product_name": "Malloc disk", 00:25:09.495 "block_size": 512, 00:25:09.495 "num_blocks": 65536, 00:25:09.495 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:09.495 "assigned_rate_limits": { 00:25:09.495 "rw_ios_per_sec": 0, 00:25:09.495 "rw_mbytes_per_sec": 0, 00:25:09.495 "r_mbytes_per_sec": 0, 00:25:09.495 "w_mbytes_per_sec": 0 00:25:09.495 }, 00:25:09.495 "claimed": true, 00:25:09.495 "claim_type": "exclusive_write", 00:25:09.495 "zoned": false, 00:25:09.495 "supported_io_types": { 00:25:09.495 "read": true, 00:25:09.495 "write": true, 00:25:09.495 "unmap": true, 00:25:09.495 "flush": true, 00:25:09.495 "reset": true, 00:25:09.495 "nvme_admin": false, 00:25:09.495 "nvme_io": false, 00:25:09.495 "nvme_io_md": false, 00:25:09.495 "write_zeroes": true, 00:25:09.495 "zcopy": true, 00:25:09.495 "get_zone_info": false, 00:25:09.495 "zone_management": false, 00:25:09.495 "zone_append": false, 00:25:09.495 "compare": false, 00:25:09.495 "compare_and_write": false, 00:25:09.495 "abort": true, 00:25:09.495 "seek_hole": false, 00:25:09.495 "seek_data": false, 00:25:09.495 "copy": true, 00:25:09.495 "nvme_iov_md": false 00:25:09.495 }, 00:25:09.495 "memory_domains": [ 00:25:09.495 { 00:25:09.495 "dma_device_id": "system", 00:25:09.495 "dma_device_type": 1 00:25:09.495 }, 00:25:09.495 { 00:25:09.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.495 "dma_device_type": 2 00:25:09.495 } 00:25:09.495 ], 00:25:09.495 "driver_specific": {} 00:25:09.495 } 00:25:09.495 ] 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.495 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.794 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.794 "name": "Existed_Raid", 00:25:09.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.794 "strip_size_kb": 64, 00:25:09.794 "state": "configuring", 00:25:09.794 "raid_level": "raid5f", 00:25:09.794 "superblock": false, 00:25:09.794 "num_base_bdevs": 3, 00:25:09.794 "num_base_bdevs_discovered": 1, 00:25:09.794 "num_base_bdevs_operational": 3, 00:25:09.794 "base_bdevs_list": [ 00:25:09.794 { 00:25:09.794 "name": "BaseBdev1", 00:25:09.794 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:09.794 "is_configured": true, 00:25:09.794 "data_offset": 0, 00:25:09.794 "data_size": 65536 00:25:09.794 }, 00:25:09.794 { 00:25:09.794 "name": "BaseBdev2", 00:25:09.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.794 "is_configured": false, 00:25:09.794 "data_offset": 0, 00:25:09.794 "data_size": 0 00:25:09.794 }, 00:25:09.794 { 00:25:09.794 "name": "BaseBdev3", 00:25:09.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.794 "is_configured": false, 00:25:09.794 "data_offset": 0, 00:25:09.794 "data_size": 0 00:25:09.794 } 00:25:09.794 ] 00:25:09.794 }' 00:25:09.794 11:36:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.794 11:36:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.361 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:10.619 [2024-07-25 11:36:26.408532] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:10.619 [2024-07-25 11:36:26.408696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:10.619 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:10.877 
[2024-07-25 11:36:26.648708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.877 [2024-07-25 11:36:26.651386] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:10.877 [2024-07-25 11:36:26.651445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:10.877 [2024-07-25 11:36:26.651466] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:10.877 [2024-07-25 11:36:26.651479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.877 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.146 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.146 "name": "Existed_Raid", 00:25:11.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.146 "strip_size_kb": 64, 00:25:11.146 "state": "configuring", 00:25:11.146 "raid_level": "raid5f", 00:25:11.146 "superblock": false, 00:25:11.146 "num_base_bdevs": 3, 00:25:11.146 "num_base_bdevs_discovered": 1, 00:25:11.146 "num_base_bdevs_operational": 3, 00:25:11.146 "base_bdevs_list": [ 00:25:11.146 { 00:25:11.146 "name": "BaseBdev1", 00:25:11.146 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:11.146 "is_configured": true, 00:25:11.146 "data_offset": 0, 00:25:11.146 "data_size": 65536 00:25:11.146 }, 00:25:11.146 { 00:25:11.146 "name": "BaseBdev2", 00:25:11.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.146 "is_configured": false, 00:25:11.146 "data_offset": 0, 00:25:11.146 "data_size": 0 00:25:11.146 }, 00:25:11.146 { 00:25:11.146 "name": "BaseBdev3", 00:25:11.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.146 "is_configured": false, 00:25:11.146 "data_offset": 0, 00:25:11.146 "data_size": 0 00:25:11.146 } 00:25:11.146 ] 00:25:11.146 }' 
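With BaseBdev1 registered, num_base_bdevs_discovered has moved from 0 to 1 while the volume stays in "configuring". The rest of the test brings the array online by creating the other two malloc base bdevs; a compact sketch of that step, illustrative only and using the same bdev_malloc_create arguments that appear in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Create the two remaining malloc members (65536 blocks of 512 bytes each);
# each one is claimed by Existed_Raid as soon as it registers.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev3
# Once all three members are discovered the raid5f volume switches to "online".
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
  | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'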
00:25:11.146 11:36:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.146 11:36:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.725 11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:11.983 [2024-07-25 11:36:27.859464] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:11.983 BaseBdev2 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:12.241 11:36:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.500 11:36:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:12.500 [ 00:25:12.500 { 00:25:12.500 "name": "BaseBdev2", 00:25:12.500 "aliases": [ 00:25:12.500 "84559e86-68c2-4175-8160-ecdf4fdbc1a8" 00:25:12.500 ], 00:25:12.500 "product_name": "Malloc disk", 00:25:12.500 "block_size": 512, 00:25:12.500 "num_blocks": 65536, 00:25:12.500 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:12.500 "assigned_rate_limits": { 00:25:12.500 "rw_ios_per_sec": 0, 00:25:12.500 "rw_mbytes_per_sec": 0, 00:25:12.500 "r_mbytes_per_sec": 0, 00:25:12.500 "w_mbytes_per_sec": 0 00:25:12.500 }, 00:25:12.500 "claimed": true, 00:25:12.500 "claim_type": "exclusive_write", 00:25:12.500 "zoned": false, 00:25:12.500 "supported_io_types": { 00:25:12.500 "read": true, 00:25:12.500 "write": true, 00:25:12.500 "unmap": true, 00:25:12.500 "flush": true, 00:25:12.500 "reset": true, 00:25:12.500 "nvme_admin": false, 00:25:12.500 "nvme_io": false, 00:25:12.500 "nvme_io_md": false, 00:25:12.500 "write_zeroes": true, 00:25:12.500 "zcopy": true, 00:25:12.500 "get_zone_info": false, 00:25:12.500 "zone_management": false, 00:25:12.500 "zone_append": false, 00:25:12.500 "compare": false, 00:25:12.500 "compare_and_write": false, 00:25:12.500 "abort": true, 00:25:12.500 "seek_hole": false, 00:25:12.500 "seek_data": false, 00:25:12.500 "copy": true, 00:25:12.500 "nvme_iov_md": false 00:25:12.500 }, 00:25:12.500 "memory_domains": [ 00:25:12.500 { 00:25:12.500 "dma_device_id": "system", 00:25:12.500 "dma_device_type": 1 00:25:12.500 }, 00:25:12.500 { 00:25:12.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.500 "dma_device_type": 2 00:25:12.500 } 00:25:12.500 ], 00:25:12.500 "driver_specific": {} 00:25:12.500 } 00:25:12.500 ] 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.759 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.017 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.017 "name": "Existed_Raid", 00:25:13.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.017 "strip_size_kb": 64, 00:25:13.017 "state": "configuring", 00:25:13.017 "raid_level": "raid5f", 00:25:13.017 "superblock": false, 00:25:13.017 "num_base_bdevs": 3, 00:25:13.017 "num_base_bdevs_discovered": 2, 00:25:13.017 "num_base_bdevs_operational": 3, 00:25:13.017 "base_bdevs_list": [ 00:25:13.017 { 00:25:13.017 "name": "BaseBdev1", 00:25:13.017 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:13.017 "is_configured": true, 00:25:13.017 "data_offset": 0, 00:25:13.017 "data_size": 65536 00:25:13.017 }, 00:25:13.017 { 00:25:13.017 "name": "BaseBdev2", 00:25:13.017 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:13.017 "is_configured": true, 00:25:13.017 "data_offset": 0, 00:25:13.017 "data_size": 65536 00:25:13.017 }, 00:25:13.017 { 00:25:13.018 "name": "BaseBdev3", 00:25:13.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.018 "is_configured": false, 00:25:13.018 "data_offset": 0, 00:25:13.018 "data_size": 0 00:25:13.018 } 00:25:13.018 ] 00:25:13.018 }' 00:25:13.018 11:36:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.018 11:36:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.584 11:36:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:13.844 [2024-07-25 11:36:29.650578] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.844 [2024-07-25 11:36:29.650735] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:13.844 [2024-07-25 11:36:29.650754] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 
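The blockcnt reported above (131072 blocks of 512 bytes) matches the raid5f geometry seen here: three 65536-block members expose two members' worth of data capacity, 2 x 65536 = 131072 blocks, with the remaining member's worth used for parity. A small sketch, illustrative only and reusing the bdev_get_bdevs call that the trace issues further down, for reading those properties back from the assembled volume:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Dump the assembled raid5f volume and pull out its size and raid-specific state.
"$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid \
  | jq '.[] | {num_blocks, block_size, raid_level: .driver_specific.raid.raid_level, state: .driver_specific.raid.state}'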
00:25:13.844 [2024-07-25 11:36:29.651110] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:13.844 [2024-07-25 11:36:29.656440] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:13.844 [2024-07-25 11:36:29.656478] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:13.844 [2024-07-25 11:36:29.656855] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.844 BaseBdev3 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:13.844 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:14.103 11:36:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:14.362 [ 00:25:14.362 { 00:25:14.362 "name": "BaseBdev3", 00:25:14.362 "aliases": [ 00:25:14.362 "ea4e03be-266f-49d1-b98b-0791ff601439" 00:25:14.362 ], 00:25:14.362 "product_name": "Malloc disk", 00:25:14.362 "block_size": 512, 00:25:14.362 "num_blocks": 65536, 00:25:14.362 "uuid": "ea4e03be-266f-49d1-b98b-0791ff601439", 00:25:14.362 "assigned_rate_limits": { 00:25:14.362 "rw_ios_per_sec": 0, 00:25:14.362 "rw_mbytes_per_sec": 0, 00:25:14.362 "r_mbytes_per_sec": 0, 00:25:14.362 "w_mbytes_per_sec": 0 00:25:14.362 }, 00:25:14.362 "claimed": true, 00:25:14.362 "claim_type": "exclusive_write", 00:25:14.362 "zoned": false, 00:25:14.362 "supported_io_types": { 00:25:14.362 "read": true, 00:25:14.362 "write": true, 00:25:14.362 "unmap": true, 00:25:14.362 "flush": true, 00:25:14.362 "reset": true, 00:25:14.362 "nvme_admin": false, 00:25:14.362 "nvme_io": false, 00:25:14.362 "nvme_io_md": false, 00:25:14.363 "write_zeroes": true, 00:25:14.363 "zcopy": true, 00:25:14.363 "get_zone_info": false, 00:25:14.363 "zone_management": false, 00:25:14.363 "zone_append": false, 00:25:14.363 "compare": false, 00:25:14.363 "compare_and_write": false, 00:25:14.363 "abort": true, 00:25:14.363 "seek_hole": false, 00:25:14.363 "seek_data": false, 00:25:14.363 "copy": true, 00:25:14.363 "nvme_iov_md": false 00:25:14.363 }, 00:25:14.363 "memory_domains": [ 00:25:14.363 { 00:25:14.363 "dma_device_id": "system", 00:25:14.363 "dma_device_type": 1 00:25:14.363 }, 00:25:14.363 { 00:25:14.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.363 "dma_device_type": 2 00:25:14.363 } 00:25:14.363 ], 00:25:14.363 "driver_specific": {} 00:25:14.363 } 00:25:14.363 ] 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.363 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.622 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.622 "name": "Existed_Raid", 00:25:14.622 "uuid": "f5e1e6ed-c1e7-49b9-9059-6a4672d26fef", 00:25:14.622 "strip_size_kb": 64, 00:25:14.622 "state": "online", 00:25:14.622 "raid_level": "raid5f", 00:25:14.622 "superblock": false, 00:25:14.622 "num_base_bdevs": 3, 00:25:14.622 "num_base_bdevs_discovered": 3, 00:25:14.622 "num_base_bdevs_operational": 3, 00:25:14.622 "base_bdevs_list": [ 00:25:14.622 { 00:25:14.622 "name": "BaseBdev1", 00:25:14.622 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 0, 00:25:14.622 "data_size": 65536 00:25:14.622 }, 00:25:14.622 { 00:25:14.622 "name": "BaseBdev2", 00:25:14.622 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 0, 00:25:14.622 "data_size": 65536 00:25:14.622 }, 00:25:14.622 { 00:25:14.622 "name": "BaseBdev3", 00:25:14.622 "uuid": "ea4e03be-266f-49d1-b98b-0791ff601439", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 0, 00:25:14.622 "data_size": 65536 00:25:14.622 } 00:25:14.622 ] 00:25:14.622 }' 00:25:14.622 11:36:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.622 11:36:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:15.255 
11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:15.255 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:15.513 [2024-07-25 11:36:31.323857] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.513 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:15.513 "name": "Existed_Raid", 00:25:15.513 "aliases": [ 00:25:15.513 "f5e1e6ed-c1e7-49b9-9059-6a4672d26fef" 00:25:15.513 ], 00:25:15.513 "product_name": "Raid Volume", 00:25:15.513 "block_size": 512, 00:25:15.513 "num_blocks": 131072, 00:25:15.513 "uuid": "f5e1e6ed-c1e7-49b9-9059-6a4672d26fef", 00:25:15.513 "assigned_rate_limits": { 00:25:15.513 "rw_ios_per_sec": 0, 00:25:15.513 "rw_mbytes_per_sec": 0, 00:25:15.513 "r_mbytes_per_sec": 0, 00:25:15.513 "w_mbytes_per_sec": 0 00:25:15.513 }, 00:25:15.513 "claimed": false, 00:25:15.513 "zoned": false, 00:25:15.513 "supported_io_types": { 00:25:15.513 "read": true, 00:25:15.513 "write": true, 00:25:15.513 "unmap": false, 00:25:15.513 "flush": false, 00:25:15.513 "reset": true, 00:25:15.513 "nvme_admin": false, 00:25:15.513 "nvme_io": false, 00:25:15.513 "nvme_io_md": false, 00:25:15.513 "write_zeroes": true, 00:25:15.513 "zcopy": false, 00:25:15.513 "get_zone_info": false, 00:25:15.513 "zone_management": false, 00:25:15.513 "zone_append": false, 00:25:15.513 "compare": false, 00:25:15.513 "compare_and_write": false, 00:25:15.513 "abort": false, 00:25:15.513 "seek_hole": false, 00:25:15.513 "seek_data": false, 00:25:15.513 "copy": false, 00:25:15.513 "nvme_iov_md": false 00:25:15.513 }, 00:25:15.513 "driver_specific": { 00:25:15.513 "raid": { 00:25:15.513 "uuid": "f5e1e6ed-c1e7-49b9-9059-6a4672d26fef", 00:25:15.513 "strip_size_kb": 64, 00:25:15.513 "state": "online", 00:25:15.513 "raid_level": "raid5f", 00:25:15.513 "superblock": false, 00:25:15.513 "num_base_bdevs": 3, 00:25:15.513 "num_base_bdevs_discovered": 3, 00:25:15.513 "num_base_bdevs_operational": 3, 00:25:15.513 "base_bdevs_list": [ 00:25:15.513 { 00:25:15.513 "name": "BaseBdev1", 00:25:15.513 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:15.513 "is_configured": true, 00:25:15.513 "data_offset": 0, 00:25:15.513 "data_size": 65536 00:25:15.513 }, 00:25:15.513 { 00:25:15.513 "name": "BaseBdev2", 00:25:15.513 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:15.513 "is_configured": true, 00:25:15.513 "data_offset": 0, 00:25:15.513 "data_size": 65536 00:25:15.513 }, 00:25:15.513 { 00:25:15.513 "name": "BaseBdev3", 00:25:15.513 "uuid": "ea4e03be-266f-49d1-b98b-0791ff601439", 00:25:15.513 "is_configured": true, 00:25:15.513 "data_offset": 0, 00:25:15.513 "data_size": 65536 00:25:15.513 } 00:25:15.513 ] 00:25:15.513 } 00:25:15.513 } 00:25:15.513 }' 00:25:15.513 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:15.773 BaseBdev2 00:25:15.773 BaseBdev3' 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:15.773 "name": "BaseBdev1", 00:25:15.773 "aliases": [ 00:25:15.773 "80f43456-35c4-4232-b76e-32d14a3e75e4" 00:25:15.773 ], 00:25:15.773 "product_name": "Malloc disk", 00:25:15.773 "block_size": 512, 00:25:15.773 "num_blocks": 65536, 00:25:15.773 "uuid": "80f43456-35c4-4232-b76e-32d14a3e75e4", 00:25:15.773 "assigned_rate_limits": { 00:25:15.773 "rw_ios_per_sec": 0, 00:25:15.773 "rw_mbytes_per_sec": 0, 00:25:15.773 "r_mbytes_per_sec": 0, 00:25:15.773 "w_mbytes_per_sec": 0 00:25:15.773 }, 00:25:15.773 "claimed": true, 00:25:15.773 "claim_type": "exclusive_write", 00:25:15.773 "zoned": false, 00:25:15.773 "supported_io_types": { 00:25:15.773 "read": true, 00:25:15.773 "write": true, 00:25:15.773 "unmap": true, 00:25:15.773 "flush": true, 00:25:15.773 "reset": true, 00:25:15.773 "nvme_admin": false, 00:25:15.773 "nvme_io": false, 00:25:15.773 "nvme_io_md": false, 00:25:15.773 "write_zeroes": true, 00:25:15.773 "zcopy": true, 00:25:15.773 "get_zone_info": false, 00:25:15.773 "zone_management": false, 00:25:15.773 "zone_append": false, 00:25:15.773 "compare": false, 00:25:15.773 "compare_and_write": false, 00:25:15.773 "abort": true, 00:25:15.773 "seek_hole": false, 00:25:15.773 "seek_data": false, 00:25:15.773 "copy": true, 00:25:15.773 "nvme_iov_md": false 00:25:15.773 }, 00:25:15.773 "memory_domains": [ 00:25:15.773 { 00:25:15.773 "dma_device_id": "system", 00:25:15.773 "dma_device_type": 1 00:25:15.773 }, 00:25:15.773 { 00:25:15.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.773 "dma_device_type": 2 00:25:15.773 } 00:25:15.773 ], 00:25:15.773 "driver_specific": {} 00:25:15.773 }' 00:25:15.773 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.032 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.290 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:16.290 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:16.290 11:36:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:16.290 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:16.290 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:16.290 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:16.290 11:36:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:16.548 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:16.548 "name": "BaseBdev2", 00:25:16.548 "aliases": [ 00:25:16.548 "84559e86-68c2-4175-8160-ecdf4fdbc1a8" 00:25:16.548 ], 00:25:16.548 "product_name": "Malloc disk", 00:25:16.548 "block_size": 512, 00:25:16.548 "num_blocks": 65536, 00:25:16.548 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:16.548 "assigned_rate_limits": { 00:25:16.548 "rw_ios_per_sec": 0, 00:25:16.548 "rw_mbytes_per_sec": 0, 00:25:16.548 "r_mbytes_per_sec": 0, 00:25:16.548 "w_mbytes_per_sec": 0 00:25:16.548 }, 00:25:16.548 "claimed": true, 00:25:16.548 "claim_type": "exclusive_write", 00:25:16.548 "zoned": false, 00:25:16.548 "supported_io_types": { 00:25:16.548 "read": true, 00:25:16.548 "write": true, 00:25:16.548 "unmap": true, 00:25:16.548 "flush": true, 00:25:16.548 "reset": true, 00:25:16.548 "nvme_admin": false, 00:25:16.548 "nvme_io": false, 00:25:16.548 "nvme_io_md": false, 00:25:16.548 "write_zeroes": true, 00:25:16.548 "zcopy": true, 00:25:16.548 "get_zone_info": false, 00:25:16.548 "zone_management": false, 00:25:16.548 "zone_append": false, 00:25:16.548 "compare": false, 00:25:16.548 "compare_and_write": false, 00:25:16.548 "abort": true, 00:25:16.548 "seek_hole": false, 00:25:16.548 "seek_data": false, 00:25:16.548 "copy": true, 00:25:16.548 "nvme_iov_md": false 00:25:16.548 }, 00:25:16.548 "memory_domains": [ 00:25:16.548 { 00:25:16.548 "dma_device_id": "system", 00:25:16.548 "dma_device_type": 1 00:25:16.548 }, 00:25:16.548 { 00:25:16.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.548 "dma_device_type": 2 00:25:16.548 } 00:25:16.548 ], 00:25:16.548 "driver_specific": {} 00:25:16.548 }' 00:25:16.548 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:16.548 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:16.548 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:16.548 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:16.807 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:17.065 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:17.065 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:17.065 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:17.065 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:17.324 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:17.324 "name": 
"BaseBdev3", 00:25:17.324 "aliases": [ 00:25:17.324 "ea4e03be-266f-49d1-b98b-0791ff601439" 00:25:17.324 ], 00:25:17.324 "product_name": "Malloc disk", 00:25:17.324 "block_size": 512, 00:25:17.324 "num_blocks": 65536, 00:25:17.324 "uuid": "ea4e03be-266f-49d1-b98b-0791ff601439", 00:25:17.324 "assigned_rate_limits": { 00:25:17.324 "rw_ios_per_sec": 0, 00:25:17.324 "rw_mbytes_per_sec": 0, 00:25:17.324 "r_mbytes_per_sec": 0, 00:25:17.324 "w_mbytes_per_sec": 0 00:25:17.324 }, 00:25:17.324 "claimed": true, 00:25:17.324 "claim_type": "exclusive_write", 00:25:17.324 "zoned": false, 00:25:17.324 "supported_io_types": { 00:25:17.324 "read": true, 00:25:17.324 "write": true, 00:25:17.324 "unmap": true, 00:25:17.324 "flush": true, 00:25:17.324 "reset": true, 00:25:17.324 "nvme_admin": false, 00:25:17.324 "nvme_io": false, 00:25:17.324 "nvme_io_md": false, 00:25:17.324 "write_zeroes": true, 00:25:17.324 "zcopy": true, 00:25:17.324 "get_zone_info": false, 00:25:17.324 "zone_management": false, 00:25:17.324 "zone_append": false, 00:25:17.324 "compare": false, 00:25:17.324 "compare_and_write": false, 00:25:17.324 "abort": true, 00:25:17.324 "seek_hole": false, 00:25:17.324 "seek_data": false, 00:25:17.324 "copy": true, 00:25:17.324 "nvme_iov_md": false 00:25:17.324 }, 00:25:17.324 "memory_domains": [ 00:25:17.324 { 00:25:17.324 "dma_device_id": "system", 00:25:17.324 "dma_device_type": 1 00:25:17.324 }, 00:25:17.324 { 00:25:17.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.324 "dma_device_type": 2 00:25:17.324 } 00:25:17.324 ], 00:25:17.324 "driver_specific": {} 00:25:17.324 }' 00:25:17.324 11:36:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:17.324 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:17.583 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:17.841 [2024-07-25 11:36:33.668236] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 
-- # return 0 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.116 11:36:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.394 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.394 "name": "Existed_Raid", 00:25:18.394 "uuid": "f5e1e6ed-c1e7-49b9-9059-6a4672d26fef", 00:25:18.394 "strip_size_kb": 64, 00:25:18.394 "state": "online", 00:25:18.394 "raid_level": "raid5f", 00:25:18.394 "superblock": false, 00:25:18.394 "num_base_bdevs": 3, 00:25:18.394 "num_base_bdevs_discovered": 2, 00:25:18.394 "num_base_bdevs_operational": 2, 00:25:18.394 "base_bdevs_list": [ 00:25:18.394 { 00:25:18.394 "name": null, 00:25:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.394 "is_configured": false, 00:25:18.394 "data_offset": 0, 00:25:18.394 "data_size": 65536 00:25:18.394 }, 00:25:18.394 { 00:25:18.394 "name": "BaseBdev2", 00:25:18.394 "uuid": "84559e86-68c2-4175-8160-ecdf4fdbc1a8", 00:25:18.394 "is_configured": true, 00:25:18.394 "data_offset": 0, 00:25:18.394 "data_size": 65536 00:25:18.394 }, 00:25:18.394 { 00:25:18.394 "name": "BaseBdev3", 00:25:18.394 "uuid": "ea4e03be-266f-49d1-b98b-0791ff601439", 00:25:18.394 "is_configured": true, 00:25:18.394 "data_offset": 0, 00:25:18.394 "data_size": 65536 00:25:18.394 } 00:25:18.394 ] 00:25:18.394 }' 00:25:18.394 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.394 11:36:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.969 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:18.969 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:18.969 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.969 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:19.228 11:36:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:19.228 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:19.228 11:36:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:19.487 [2024-07-25 11:36:35.179184] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.487 [2024-07-25 11:36:35.179323] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:19.487 [2024-07-25 11:36:35.265330] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.487 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:19.487 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:19.487 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.487 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:19.744 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:19.744 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:19.744 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:20.002 [2024-07-25 11:36:35.793497] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:20.002 [2024-07-25 11:36:35.793583] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:20.260 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:20.260 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:20.260 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.260 11:36:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:20.518 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.776 BaseBdev2 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
local bdev_timeout= 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:20.776 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.034 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:21.293 [ 00:25:21.293 { 00:25:21.293 "name": "BaseBdev2", 00:25:21.293 "aliases": [ 00:25:21.293 "04c0dfd1-51d1-47c2-8fd9-79c9913772b0" 00:25:21.293 ], 00:25:21.293 "product_name": "Malloc disk", 00:25:21.293 "block_size": 512, 00:25:21.293 "num_blocks": 65536, 00:25:21.293 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:21.293 "assigned_rate_limits": { 00:25:21.293 "rw_ios_per_sec": 0, 00:25:21.293 "rw_mbytes_per_sec": 0, 00:25:21.293 "r_mbytes_per_sec": 0, 00:25:21.293 "w_mbytes_per_sec": 0 00:25:21.293 }, 00:25:21.293 "claimed": false, 00:25:21.293 "zoned": false, 00:25:21.293 "supported_io_types": { 00:25:21.293 "read": true, 00:25:21.293 "write": true, 00:25:21.293 "unmap": true, 00:25:21.293 "flush": true, 00:25:21.293 "reset": true, 00:25:21.293 "nvme_admin": false, 00:25:21.293 "nvme_io": false, 00:25:21.293 "nvme_io_md": false, 00:25:21.293 "write_zeroes": true, 00:25:21.293 "zcopy": true, 00:25:21.293 "get_zone_info": false, 00:25:21.293 "zone_management": false, 00:25:21.293 "zone_append": false, 00:25:21.293 "compare": false, 00:25:21.293 "compare_and_write": false, 00:25:21.293 "abort": true, 00:25:21.293 "seek_hole": false, 00:25:21.293 "seek_data": false, 00:25:21.293 "copy": true, 00:25:21.293 "nvme_iov_md": false 00:25:21.293 }, 00:25:21.293 "memory_domains": [ 00:25:21.293 { 00:25:21.293 "dma_device_id": "system", 00:25:21.293 "dma_device_type": 1 00:25:21.293 }, 00:25:21.293 { 00:25:21.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.293 "dma_device_type": 2 00:25:21.293 } 00:25:21.293 ], 00:25:21.293 "driver_specific": {} 00:25:21.293 } 00:25:21.293 ] 00:25:21.293 11:36:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:21.293 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:21.293 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:21.293 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:21.552 BaseBdev3 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:21.552 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.810 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:22.069 [ 00:25:22.069 { 00:25:22.069 "name": "BaseBdev3", 00:25:22.069 "aliases": [ 00:25:22.069 "ebd972ab-530c-4184-b47c-596c05efa465" 00:25:22.069 ], 00:25:22.069 "product_name": "Malloc disk", 00:25:22.069 "block_size": 512, 00:25:22.069 "num_blocks": 65536, 00:25:22.069 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:22.069 "assigned_rate_limits": { 00:25:22.069 "rw_ios_per_sec": 0, 00:25:22.069 "rw_mbytes_per_sec": 0, 00:25:22.069 "r_mbytes_per_sec": 0, 00:25:22.069 "w_mbytes_per_sec": 0 00:25:22.069 }, 00:25:22.069 "claimed": false, 00:25:22.069 "zoned": false, 00:25:22.069 "supported_io_types": { 00:25:22.069 "read": true, 00:25:22.069 "write": true, 00:25:22.069 "unmap": true, 00:25:22.069 "flush": true, 00:25:22.069 "reset": true, 00:25:22.069 "nvme_admin": false, 00:25:22.069 "nvme_io": false, 00:25:22.069 "nvme_io_md": false, 00:25:22.069 "write_zeroes": true, 00:25:22.069 "zcopy": true, 00:25:22.069 "get_zone_info": false, 00:25:22.069 "zone_management": false, 00:25:22.069 "zone_append": false, 00:25:22.069 "compare": false, 00:25:22.069 "compare_and_write": false, 00:25:22.069 "abort": true, 00:25:22.069 "seek_hole": false, 00:25:22.069 "seek_data": false, 00:25:22.069 "copy": true, 00:25:22.069 "nvme_iov_md": false 00:25:22.069 }, 00:25:22.069 "memory_domains": [ 00:25:22.069 { 00:25:22.069 "dma_device_id": "system", 00:25:22.069 "dma_device_type": 1 00:25:22.069 }, 00:25:22.069 { 00:25:22.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.069 "dma_device_type": 2 00:25:22.069 } 00:25:22.069 ], 00:25:22.069 "driver_specific": {} 00:25:22.069 } 00:25:22.069 ] 00:25:22.069 11:36:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:22.069 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:22.069 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:22.069 11:36:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:22.327 [2024-07-25 11:36:38.012665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:22.327 [2024-07-25 11:36:38.012740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:22.327 [2024-07-25 11:36:38.012803] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:22.327 [2024-07-25 11:36:38.015396] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.327 11:36:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.327 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.585 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.585 "name": "Existed_Raid", 00:25:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.585 "strip_size_kb": 64, 00:25:22.585 "state": "configuring", 00:25:22.585 "raid_level": "raid5f", 00:25:22.585 "superblock": false, 00:25:22.585 "num_base_bdevs": 3, 00:25:22.585 "num_base_bdevs_discovered": 2, 00:25:22.585 "num_base_bdevs_operational": 3, 00:25:22.585 "base_bdevs_list": [ 00:25:22.585 { 00:25:22.585 "name": "BaseBdev1", 00:25:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.585 "is_configured": false, 00:25:22.585 "data_offset": 0, 00:25:22.585 "data_size": 0 00:25:22.585 }, 00:25:22.585 { 00:25:22.585 "name": "BaseBdev2", 00:25:22.585 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:22.585 "is_configured": true, 00:25:22.585 "data_offset": 0, 00:25:22.585 "data_size": 65536 00:25:22.585 }, 00:25:22.585 { 00:25:22.585 "name": "BaseBdev3", 00:25:22.585 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:22.585 "is_configured": true, 00:25:22.585 "data_offset": 0, 00:25:22.585 "data_size": 65536 00:25:22.585 } 00:25:22.585 ] 00:25:22.585 }' 00:25:22.585 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.585 11:36:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.150 11:36:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:23.408 [2024-07-25 11:36:39.140994] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:23.408 
11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.408 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.666 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.666 "name": "Existed_Raid", 00:25:23.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.666 "strip_size_kb": 64, 00:25:23.666 "state": "configuring", 00:25:23.666 "raid_level": "raid5f", 00:25:23.666 "superblock": false, 00:25:23.666 "num_base_bdevs": 3, 00:25:23.666 "num_base_bdevs_discovered": 1, 00:25:23.666 "num_base_bdevs_operational": 3, 00:25:23.666 "base_bdevs_list": [ 00:25:23.666 { 00:25:23.666 "name": "BaseBdev1", 00:25:23.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.666 "is_configured": false, 00:25:23.666 "data_offset": 0, 00:25:23.666 "data_size": 0 00:25:23.666 }, 00:25:23.666 { 00:25:23.666 "name": null, 00:25:23.666 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:23.666 "is_configured": false, 00:25:23.666 "data_offset": 0, 00:25:23.666 "data_size": 65536 00:25:23.666 }, 00:25:23.666 { 00:25:23.666 "name": "BaseBdev3", 00:25:23.666 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:23.666 "is_configured": true, 00:25:23.666 "data_offset": 0, 00:25:23.666 "data_size": 65536 00:25:23.666 } 00:25:23.666 ] 00:25:23.666 }' 00:25:23.666 11:36:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.666 11:36:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.231 11:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.231 11:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:24.526 11:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:24.526 11:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:24.784 [2024-07-25 11:36:40.605285] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.784 BaseBdev1 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:24.784 11:36:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:24.784 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:25.042 11:36:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:25.300 [ 00:25:25.300 { 00:25:25.300 "name": "BaseBdev1", 00:25:25.300 "aliases": [ 00:25:25.300 "6c1fa7a8-545e-4944-89ca-5b91a7258b45" 00:25:25.300 ], 00:25:25.300 "product_name": "Malloc disk", 00:25:25.300 "block_size": 512, 00:25:25.300 "num_blocks": 65536, 00:25:25.300 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:25.300 "assigned_rate_limits": { 00:25:25.300 "rw_ios_per_sec": 0, 00:25:25.300 "rw_mbytes_per_sec": 0, 00:25:25.300 "r_mbytes_per_sec": 0, 00:25:25.300 "w_mbytes_per_sec": 0 00:25:25.300 }, 00:25:25.300 "claimed": true, 00:25:25.300 "claim_type": "exclusive_write", 00:25:25.300 "zoned": false, 00:25:25.300 "supported_io_types": { 00:25:25.300 "read": true, 00:25:25.300 "write": true, 00:25:25.300 "unmap": true, 00:25:25.300 "flush": true, 00:25:25.300 "reset": true, 00:25:25.300 "nvme_admin": false, 00:25:25.300 "nvme_io": false, 00:25:25.300 "nvme_io_md": false, 00:25:25.300 "write_zeroes": true, 00:25:25.300 "zcopy": true, 00:25:25.300 "get_zone_info": false, 00:25:25.300 "zone_management": false, 00:25:25.300 "zone_append": false, 00:25:25.301 "compare": false, 00:25:25.301 "compare_and_write": false, 00:25:25.301 "abort": true, 00:25:25.301 "seek_hole": false, 00:25:25.301 "seek_data": false, 00:25:25.301 "copy": true, 00:25:25.301 "nvme_iov_md": false 00:25:25.301 }, 00:25:25.301 "memory_domains": [ 00:25:25.301 { 00:25:25.301 "dma_device_id": "system", 00:25:25.301 "dma_device_type": 1 00:25:25.301 }, 00:25:25.301 { 00:25:25.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:25.301 "dma_device_type": 2 00:25:25.301 } 00:25:25.301 ], 00:25:25.301 "driver_specific": {} 00:25:25.301 } 00:25:25.301 ] 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.301 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.558 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.558 "name": "Existed_Raid", 00:25:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.559 "strip_size_kb": 64, 00:25:25.559 "state": "configuring", 00:25:25.559 "raid_level": "raid5f", 00:25:25.559 "superblock": false, 00:25:25.559 "num_base_bdevs": 3, 00:25:25.559 "num_base_bdevs_discovered": 2, 00:25:25.559 "num_base_bdevs_operational": 3, 00:25:25.559 "base_bdevs_list": [ 00:25:25.559 { 00:25:25.559 "name": "BaseBdev1", 00:25:25.559 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:25.559 "is_configured": true, 00:25:25.559 "data_offset": 0, 00:25:25.559 "data_size": 65536 00:25:25.559 }, 00:25:25.559 { 00:25:25.559 "name": null, 00:25:25.559 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:25.559 "is_configured": false, 00:25:25.559 "data_offset": 0, 00:25:25.559 "data_size": 65536 00:25:25.559 }, 00:25:25.559 { 00:25:25.559 "name": "BaseBdev3", 00:25:25.559 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:25.559 "is_configured": true, 00:25:25.559 "data_offset": 0, 00:25:25.559 "data_size": 65536 00:25:25.559 } 00:25:25.559 ] 00:25:25.559 }' 00:25:25.559 11:36:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.559 11:36:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.494 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:26.494 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.494 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:26.494 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:26.753 [2024-07-25 11:36:42.533932] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
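(A minimal shell sketch of the state check that the trace above keeps repeating; it is not part of the captured output and assumes an SPDK target is still listening on /var/tmp/spdk-raid.sock, with rpc.py at the path shown in the trace.)
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Fetch the raid bdev entry by name and read its state ("configuring", "online" or "offline").
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
# Check whether a given slot of base_bdevs_list is currently populated.
rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid").base_bdevs_list[0].is_configured'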
00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.753 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:27.011 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.011 "name": "Existed_Raid", 00:25:27.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:27.011 "strip_size_kb": 64, 00:25:27.011 "state": "configuring", 00:25:27.011 "raid_level": "raid5f", 00:25:27.011 "superblock": false, 00:25:27.011 "num_base_bdevs": 3, 00:25:27.011 "num_base_bdevs_discovered": 1, 00:25:27.011 "num_base_bdevs_operational": 3, 00:25:27.011 "base_bdevs_list": [ 00:25:27.011 { 00:25:27.011 "name": "BaseBdev1", 00:25:27.011 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:27.011 "is_configured": true, 00:25:27.011 "data_offset": 0, 00:25:27.011 "data_size": 65536 00:25:27.011 }, 00:25:27.011 { 00:25:27.011 "name": null, 00:25:27.011 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:27.011 "is_configured": false, 00:25:27.011 "data_offset": 0, 00:25:27.011 "data_size": 65536 00:25:27.011 }, 00:25:27.011 { 00:25:27.011 "name": null, 00:25:27.011 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:27.011 "is_configured": false, 00:25:27.011 "data_offset": 0, 00:25:27.011 "data_size": 65536 00:25:27.011 } 00:25:27.011 ] 00:25:27.011 }' 00:25:27.011 11:36:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.011 11:36:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.584 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:27.584 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.845 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:27.845 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:28.103 [2024-07-25 11:36:43.962270] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:28.103 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.361 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.361 11:36:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.361 11:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.361 "name": "Existed_Raid", 00:25:28.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.361 "strip_size_kb": 64, 00:25:28.361 "state": "configuring", 00:25:28.361 "raid_level": "raid5f", 00:25:28.361 "superblock": false, 00:25:28.361 "num_base_bdevs": 3, 00:25:28.361 "num_base_bdevs_discovered": 2, 00:25:28.361 "num_base_bdevs_operational": 3, 00:25:28.361 "base_bdevs_list": [ 00:25:28.361 { 00:25:28.361 "name": "BaseBdev1", 00:25:28.361 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:28.361 "is_configured": true, 00:25:28.361 "data_offset": 0, 00:25:28.361 "data_size": 65536 00:25:28.361 }, 00:25:28.361 { 00:25:28.361 "name": null, 00:25:28.361 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:28.361 "is_configured": false, 00:25:28.361 "data_offset": 0, 00:25:28.361 "data_size": 65536 00:25:28.361 }, 00:25:28.361 { 00:25:28.361 "name": "BaseBdev3", 00:25:28.361 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:28.361 "is_configured": true, 00:25:28.361 "data_offset": 0, 00:25:28.361 "data_size": 65536 00:25:28.361 } 00:25:28.361 ] 00:25:28.361 }' 00:25:28.361 11:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.361 11:36:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.295 11:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.295 11:36:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:29.295 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:29.295 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:29.553 [2024-07-25 11:36:45.414763] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.811 
11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.811 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.068 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.068 "name": "Existed_Raid", 00:25:30.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.068 "strip_size_kb": 64, 00:25:30.068 "state": "configuring", 00:25:30.069 "raid_level": "raid5f", 00:25:30.069 "superblock": false, 00:25:30.069 "num_base_bdevs": 3, 00:25:30.069 "num_base_bdevs_discovered": 1, 00:25:30.069 "num_base_bdevs_operational": 3, 00:25:30.069 "base_bdevs_list": [ 00:25:30.069 { 00:25:30.069 "name": null, 00:25:30.069 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:30.069 "is_configured": false, 00:25:30.069 "data_offset": 0, 00:25:30.069 "data_size": 65536 00:25:30.069 }, 00:25:30.069 { 00:25:30.069 "name": null, 00:25:30.069 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:30.069 "is_configured": false, 00:25:30.069 "data_offset": 0, 00:25:30.069 "data_size": 65536 00:25:30.069 }, 00:25:30.069 { 00:25:30.069 "name": "BaseBdev3", 00:25:30.069 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:30.069 "is_configured": true, 00:25:30.069 "data_offset": 0, 00:25:30.069 "data_size": 65536 00:25:30.069 } 00:25:30.069 ] 00:25:30.069 }' 00:25:30.069 11:36:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.069 11:36:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.635 11:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.635 11:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:31.201 11:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:31.201 11:36:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:31.201 [2024-07-25 11:36:47.053682] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
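(Illustrative sketch only, not captured output; same socket and rpc.py path as above.) The steps just traced exercise both ways a base bdev leaves the volume, an explicit removal or deletion of the underlying malloc bdev, plus the explicit re-add:
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Detach a base bdev from the raid while keeping the underlying bdev around.
rpc bdev_raid_remove_base_bdev BaseBdev3
# Re-attach an existing bdev to the named raid.
rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
# Or pull the underlying device away entirely; the raid sees the slot become unconfigured.
rpc bdev_malloc_delete BaseBdev1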
00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.201 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.766 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.766 "name": "Existed_Raid", 00:25:31.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.766 "strip_size_kb": 64, 00:25:31.766 "state": "configuring", 00:25:31.766 "raid_level": "raid5f", 00:25:31.766 "superblock": false, 00:25:31.766 "num_base_bdevs": 3, 00:25:31.766 "num_base_bdevs_discovered": 2, 00:25:31.766 "num_base_bdevs_operational": 3, 00:25:31.766 "base_bdevs_list": [ 00:25:31.766 { 00:25:31.766 "name": null, 00:25:31.766 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:31.766 "is_configured": false, 00:25:31.766 "data_offset": 0, 00:25:31.766 "data_size": 65536 00:25:31.766 }, 00:25:31.766 { 00:25:31.766 "name": "BaseBdev2", 00:25:31.766 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:31.766 "is_configured": true, 00:25:31.766 "data_offset": 0, 00:25:31.766 "data_size": 65536 00:25:31.766 }, 00:25:31.766 { 00:25:31.766 "name": "BaseBdev3", 00:25:31.766 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:31.766 "is_configured": true, 00:25:31.766 "data_offset": 0, 00:25:31.766 "data_size": 65536 00:25:31.766 } 00:25:31.766 ] 00:25:31.766 }' 00:25:31.766 11:36:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.766 11:36:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.332 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.332 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:32.592 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:32.592 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:32.592 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.850 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6c1fa7a8-545e-4944-89ca-5b91a7258b45 00:25:33.108 [2024-07-25 11:36:48.947746] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:33.108 [2024-07-25 11:36:48.947836] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:33.108 [2024-07-25 11:36:48.947853] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:33.108 [2024-07-25 11:36:48.948207] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:25:33.108 NewBaseBdev 00:25:33.108 [2024-07-25 11:36:48.953426] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:33.108 [2024-07-25 11:36:48.953458] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:33.108 [2024-07-25 11:36:48.953786] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:33.108 11:36:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:33.677 [ 00:25:33.677 { 00:25:33.677 "name": "NewBaseBdev", 00:25:33.677 "aliases": [ 00:25:33.677 "6c1fa7a8-545e-4944-89ca-5b91a7258b45" 00:25:33.677 ], 00:25:33.677 "product_name": "Malloc disk", 00:25:33.677 "block_size": 512, 00:25:33.677 "num_blocks": 65536, 00:25:33.677 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:33.677 "assigned_rate_limits": { 00:25:33.677 "rw_ios_per_sec": 0, 00:25:33.677 "rw_mbytes_per_sec": 0, 00:25:33.677 "r_mbytes_per_sec": 0, 00:25:33.677 "w_mbytes_per_sec": 0 00:25:33.677 }, 00:25:33.677 "claimed": true, 00:25:33.677 "claim_type": "exclusive_write", 00:25:33.677 "zoned": false, 00:25:33.677 "supported_io_types": { 00:25:33.677 "read": true, 00:25:33.677 "write": true, 00:25:33.677 "unmap": true, 00:25:33.677 "flush": true, 00:25:33.677 "reset": true, 00:25:33.677 "nvme_admin": false, 00:25:33.677 "nvme_io": false, 00:25:33.677 "nvme_io_md": false, 00:25:33.677 "write_zeroes": true, 00:25:33.677 "zcopy": true, 00:25:33.677 "get_zone_info": false, 00:25:33.677 "zone_management": false, 00:25:33.677 "zone_append": false, 00:25:33.677 "compare": false, 00:25:33.677 "compare_and_write": false, 00:25:33.677 "abort": true, 00:25:33.677 "seek_hole": false, 00:25:33.677 "seek_data": false, 00:25:33.677 "copy": true, 00:25:33.677 "nvme_iov_md": false 00:25:33.677 }, 00:25:33.677 "memory_domains": [ 00:25:33.677 { 00:25:33.677 "dma_device_id": "system", 00:25:33.677 "dma_device_type": 1 00:25:33.677 }, 00:25:33.677 { 00:25:33.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.677 "dma_device_type": 2 00:25:33.677 } 00:25:33.677 ], 00:25:33.677 "driver_specific": {} 00:25:33.677 } 00:25:33.677 ] 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
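(Illustrative sketch only, not captured output.) To refill the remaining empty slot, the test reads back the UUID that slot expects, creates the replacement malloc bdev under exactly that UUID so the raid module claims it during examine, and then waits for the bdev to appear, which is all the waitforbdev helper does:
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# UUID recorded for the first (unconfigured) slot of the raid.
uuid=$(rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
# 32 MiB malloc bdev with 512-byte blocks (65536 blocks), created under the expected UUID.
rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
# Let examine finish, then poll for the new bdev with a 2000 ms timeout.
rpc bdev_wait_for_examine
rpc bdev_get_bdevs -b NewBaseBdev -t 2000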
00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.677 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:34.244 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:34.244 "name": "Existed_Raid", 00:25:34.244 "uuid": "817da327-0233-4048-91fb-dc4d49e4b039", 00:25:34.244 "strip_size_kb": 64, 00:25:34.244 "state": "online", 00:25:34.244 "raid_level": "raid5f", 00:25:34.244 "superblock": false, 00:25:34.244 "num_base_bdevs": 3, 00:25:34.244 "num_base_bdevs_discovered": 3, 00:25:34.244 "num_base_bdevs_operational": 3, 00:25:34.244 "base_bdevs_list": [ 00:25:34.244 { 00:25:34.244 "name": "NewBaseBdev", 00:25:34.244 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:34.244 "is_configured": true, 00:25:34.244 "data_offset": 0, 00:25:34.244 "data_size": 65536 00:25:34.244 }, 00:25:34.244 { 00:25:34.244 "name": "BaseBdev2", 00:25:34.244 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:34.244 "is_configured": true, 00:25:34.244 "data_offset": 0, 00:25:34.244 "data_size": 65536 00:25:34.244 }, 00:25:34.244 { 00:25:34.244 "name": "BaseBdev3", 00:25:34.244 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:34.244 "is_configured": true, 00:25:34.244 "data_offset": 0, 00:25:34.244 "data_size": 65536 00:25:34.244 } 00:25:34.244 ] 00:25:34.244 }' 00:25:34.244 11:36:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:34.244 11:36:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:34.810 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:34.810 11:36:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:35.069 [2024-07-25 11:36:50.868408] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:35.069 "name": "Existed_Raid", 00:25:35.069 "aliases": [ 00:25:35.069 "817da327-0233-4048-91fb-dc4d49e4b039" 00:25:35.069 ], 00:25:35.069 "product_name": "Raid Volume", 00:25:35.069 "block_size": 512, 00:25:35.069 "num_blocks": 131072, 00:25:35.069 "uuid": "817da327-0233-4048-91fb-dc4d49e4b039", 00:25:35.069 "assigned_rate_limits": { 00:25:35.069 "rw_ios_per_sec": 0, 00:25:35.069 "rw_mbytes_per_sec": 0, 00:25:35.069 "r_mbytes_per_sec": 0, 00:25:35.069 "w_mbytes_per_sec": 0 00:25:35.069 }, 00:25:35.069 "claimed": false, 00:25:35.069 "zoned": false, 00:25:35.069 "supported_io_types": { 00:25:35.069 "read": true, 00:25:35.069 "write": true, 00:25:35.069 "unmap": false, 00:25:35.069 "flush": false, 00:25:35.069 "reset": true, 00:25:35.069 "nvme_admin": false, 00:25:35.069 "nvme_io": false, 00:25:35.069 "nvme_io_md": false, 00:25:35.069 "write_zeroes": true, 00:25:35.069 "zcopy": false, 00:25:35.069 "get_zone_info": false, 00:25:35.069 "zone_management": false, 00:25:35.069 "zone_append": false, 00:25:35.069 "compare": false, 00:25:35.069 "compare_and_write": false, 00:25:35.069 "abort": false, 00:25:35.069 "seek_hole": false, 00:25:35.069 "seek_data": false, 00:25:35.069 "copy": false, 00:25:35.069 "nvme_iov_md": false 00:25:35.069 }, 00:25:35.069 "driver_specific": { 00:25:35.069 "raid": { 00:25:35.069 "uuid": "817da327-0233-4048-91fb-dc4d49e4b039", 00:25:35.069 "strip_size_kb": 64, 00:25:35.069 "state": "online", 00:25:35.069 "raid_level": "raid5f", 00:25:35.069 "superblock": false, 00:25:35.069 "num_base_bdevs": 3, 00:25:35.069 "num_base_bdevs_discovered": 3, 00:25:35.069 "num_base_bdevs_operational": 3, 00:25:35.069 "base_bdevs_list": [ 00:25:35.069 { 00:25:35.069 "name": "NewBaseBdev", 00:25:35.069 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:35.069 "is_configured": true, 00:25:35.069 "data_offset": 0, 00:25:35.069 "data_size": 65536 00:25:35.069 }, 00:25:35.069 { 00:25:35.069 "name": "BaseBdev2", 00:25:35.069 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:35.069 "is_configured": true, 00:25:35.069 "data_offset": 0, 00:25:35.069 "data_size": 65536 00:25:35.069 }, 00:25:35.069 { 00:25:35.069 "name": "BaseBdev3", 00:25:35.069 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 00:25:35.069 "is_configured": true, 00:25:35.069 "data_offset": 0, 00:25:35.069 "data_size": 65536 00:25:35.069 } 00:25:35.069 ] 00:25:35.069 } 00:25:35.069 } 00:25:35.069 }' 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:35.069 BaseBdev2 00:25:35.069 BaseBdev3' 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:35.069 11:36:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:35.635 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:25:35.635 "name": "NewBaseBdev", 00:25:35.635 "aliases": [ 00:25:35.635 "6c1fa7a8-545e-4944-89ca-5b91a7258b45" 00:25:35.635 ], 00:25:35.635 "product_name": "Malloc disk", 00:25:35.635 "block_size": 512, 00:25:35.635 "num_blocks": 65536, 00:25:35.635 "uuid": "6c1fa7a8-545e-4944-89ca-5b91a7258b45", 00:25:35.635 "assigned_rate_limits": { 00:25:35.635 "rw_ios_per_sec": 0, 00:25:35.635 "rw_mbytes_per_sec": 0, 00:25:35.635 "r_mbytes_per_sec": 0, 00:25:35.635 "w_mbytes_per_sec": 0 00:25:35.635 }, 00:25:35.635 "claimed": true, 00:25:35.635 "claim_type": "exclusive_write", 00:25:35.635 "zoned": false, 00:25:35.635 "supported_io_types": { 00:25:35.635 "read": true, 00:25:35.635 "write": true, 00:25:35.635 "unmap": true, 00:25:35.635 "flush": true, 00:25:35.635 "reset": true, 00:25:35.635 "nvme_admin": false, 00:25:35.635 "nvme_io": false, 00:25:35.635 "nvme_io_md": false, 00:25:35.635 "write_zeroes": true, 00:25:35.635 "zcopy": true, 00:25:35.635 "get_zone_info": false, 00:25:35.636 "zone_management": false, 00:25:35.636 "zone_append": false, 00:25:35.636 "compare": false, 00:25:35.636 "compare_and_write": false, 00:25:35.636 "abort": true, 00:25:35.636 "seek_hole": false, 00:25:35.636 "seek_data": false, 00:25:35.636 "copy": true, 00:25:35.636 "nvme_iov_md": false 00:25:35.636 }, 00:25:35.636 "memory_domains": [ 00:25:35.636 { 00:25:35.636 "dma_device_id": "system", 00:25:35.636 "dma_device_type": 1 00:25:35.636 }, 00:25:35.636 { 00:25:35.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.636 "dma_device_type": 2 00:25:35.636 } 00:25:35.636 ], 00:25:35.636 "driver_specific": {} 00:25:35.636 }' 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.636 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:35.894 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.153 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.153 "name": "BaseBdev2", 00:25:36.153 "aliases": [ 00:25:36.153 "04c0dfd1-51d1-47c2-8fd9-79c9913772b0" 00:25:36.153 ], 00:25:36.153 
"product_name": "Malloc disk", 00:25:36.153 "block_size": 512, 00:25:36.153 "num_blocks": 65536, 00:25:36.153 "uuid": "04c0dfd1-51d1-47c2-8fd9-79c9913772b0", 00:25:36.153 "assigned_rate_limits": { 00:25:36.153 "rw_ios_per_sec": 0, 00:25:36.153 "rw_mbytes_per_sec": 0, 00:25:36.153 "r_mbytes_per_sec": 0, 00:25:36.153 "w_mbytes_per_sec": 0 00:25:36.153 }, 00:25:36.153 "claimed": true, 00:25:36.153 "claim_type": "exclusive_write", 00:25:36.153 "zoned": false, 00:25:36.153 "supported_io_types": { 00:25:36.153 "read": true, 00:25:36.153 "write": true, 00:25:36.153 "unmap": true, 00:25:36.153 "flush": true, 00:25:36.153 "reset": true, 00:25:36.153 "nvme_admin": false, 00:25:36.153 "nvme_io": false, 00:25:36.153 "nvme_io_md": false, 00:25:36.153 "write_zeroes": true, 00:25:36.153 "zcopy": true, 00:25:36.153 "get_zone_info": false, 00:25:36.153 "zone_management": false, 00:25:36.153 "zone_append": false, 00:25:36.153 "compare": false, 00:25:36.153 "compare_and_write": false, 00:25:36.153 "abort": true, 00:25:36.153 "seek_hole": false, 00:25:36.153 "seek_data": false, 00:25:36.153 "copy": true, 00:25:36.153 "nvme_iov_md": false 00:25:36.153 }, 00:25:36.153 "memory_domains": [ 00:25:36.153 { 00:25:36.153 "dma_device_id": "system", 00:25:36.153 "dma_device_type": 1 00:25:36.153 }, 00:25:36.153 { 00:25:36.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.153 "dma_device_type": 2 00:25:36.153 } 00:25:36.153 ], 00:25:36.153 "driver_specific": {} 00:25:36.153 }' 00:25:36.153 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.153 11:36:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.153 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.153 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.410 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.410 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:36.410 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.410 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:36.411 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:36.411 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.411 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:36.668 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:36.668 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:36.668 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:36.668 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:36.927 "name": "BaseBdev3", 00:25:36.927 "aliases": [ 00:25:36.927 "ebd972ab-530c-4184-b47c-596c05efa465" 00:25:36.927 ], 00:25:36.927 "product_name": "Malloc disk", 00:25:36.927 "block_size": 512, 00:25:36.927 "num_blocks": 65536, 00:25:36.927 "uuid": "ebd972ab-530c-4184-b47c-596c05efa465", 
00:25:36.927 "assigned_rate_limits": { 00:25:36.927 "rw_ios_per_sec": 0, 00:25:36.927 "rw_mbytes_per_sec": 0, 00:25:36.927 "r_mbytes_per_sec": 0, 00:25:36.927 "w_mbytes_per_sec": 0 00:25:36.927 }, 00:25:36.927 "claimed": true, 00:25:36.927 "claim_type": "exclusive_write", 00:25:36.927 "zoned": false, 00:25:36.927 "supported_io_types": { 00:25:36.927 "read": true, 00:25:36.927 "write": true, 00:25:36.927 "unmap": true, 00:25:36.927 "flush": true, 00:25:36.927 "reset": true, 00:25:36.927 "nvme_admin": false, 00:25:36.927 "nvme_io": false, 00:25:36.927 "nvme_io_md": false, 00:25:36.927 "write_zeroes": true, 00:25:36.927 "zcopy": true, 00:25:36.927 "get_zone_info": false, 00:25:36.927 "zone_management": false, 00:25:36.927 "zone_append": false, 00:25:36.927 "compare": false, 00:25:36.927 "compare_and_write": false, 00:25:36.927 "abort": true, 00:25:36.927 "seek_hole": false, 00:25:36.927 "seek_data": false, 00:25:36.927 "copy": true, 00:25:36.927 "nvme_iov_md": false 00:25:36.927 }, 00:25:36.927 "memory_domains": [ 00:25:36.927 { 00:25:36.927 "dma_device_id": "system", 00:25:36.927 "dma_device_type": 1 00:25:36.927 }, 00:25:36.927 { 00:25:36.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.927 "dma_device_type": 2 00:25:36.927 } 00:25:36.927 ], 00:25:36.927 "driver_specific": {} 00:25:36.927 }' 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:36.927 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:37.185 11:36:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:37.443 [2024-07-25 11:36:53.232883] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:37.443 [2024-07-25 11:36:53.232940] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.443 [2024-07-25 11:36:53.233067] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.443 [2024-07-25 11:36:53.233471] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.443 [2024-07-25 11:36:53.233489] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 91388 
00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 91388 ']' 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 91388 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91388 00:25:37.443 killing process with pid 91388 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91388' 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 91388 00:25:37.443 [2024-07-25 11:36:53.276817] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.443 11:36:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 91388 00:25:37.702 [2024-07-25 11:36:53.543479] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:39.077 00:25:39.077 real 0m33.076s 00:25:39.077 user 1m0.615s 00:25:39.077 sys 0m4.226s 00:25:39.077 ************************************ 00:25:39.077 END TEST raid5f_state_function_test 00:25:39.077 ************************************ 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.077 11:36:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:25:39.077 11:36:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:39.077 11:36:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.077 11:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:39.077 ************************************ 00:25:39.077 START TEST raid5f_state_function_test_sb 00:25:39.077 ************************************ 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:39.077 Process raid pid: 92350 00:25:39.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=92350 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 92350' 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 92350 /var/tmp/spdk-raid.sock 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92350 ']' 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.077 11:36:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.336 [2024-07-25 11:36:54.971291] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:25:39.336 [2024-07-25 11:36:54.971770] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.336 [2024-07-25 11:36:55.152128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.595 [2024-07-25 11:36:55.394939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.854 [2024-07-25 11:36:55.611997] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.854 [2024-07-25 11:36:55.612034] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:40.111 11:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.111 11:36:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:40.111 11:36:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:40.370 [2024-07-25 11:36:56.124537] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.370 [2024-07-25 11:36:56.124659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.370 [2024-07-25 11:36:56.124681] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.370 [2024-07-25 11:36:56.124712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.370 [2024-07-25 11:36:56.124726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:40.370 [2024-07-25 11:36:56.124739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.370 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.629 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.629 "name": "Existed_Raid", 00:25:40.629 "uuid": "7d99cb5d-9e11-42e7-b9c6-83f0f6f86ec6", 00:25:40.629 "strip_size_kb": 64, 00:25:40.629 "state": "configuring", 00:25:40.629 "raid_level": "raid5f", 00:25:40.629 "superblock": true, 00:25:40.629 "num_base_bdevs": 3, 00:25:40.629 "num_base_bdevs_discovered": 0, 00:25:40.629 "num_base_bdevs_operational": 3, 00:25:40.629 "base_bdevs_list": [ 00:25:40.629 { 00:25:40.629 "name": "BaseBdev1", 00:25:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.629 "is_configured": false, 00:25:40.629 "data_offset": 0, 00:25:40.629 "data_size": 0 00:25:40.629 }, 00:25:40.629 { 00:25:40.629 "name": "BaseBdev2", 00:25:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.629 "is_configured": false, 00:25:40.629 "data_offset": 0, 00:25:40.629 "data_size": 0 00:25:40.629 }, 00:25:40.629 { 00:25:40.629 "name": "BaseBdev3", 00:25:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.629 "is_configured": false, 00:25:40.629 "data_offset": 0, 00:25:40.629 "data_size": 0 00:25:40.629 } 00:25:40.629 ] 00:25:40.629 }' 00:25:40.629 11:36:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.629 11:36:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.195 11:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:41.761 [2024-07-25 11:36:57.340799] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.761 [2024-07-25 11:36:57.340842] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:41.761 11:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:41.761 [2024-07-25 11:36:57.588946] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:41.761 [2024-07-25 11:36:57.589002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:41.761 [2024-07-25 11:36:57.589023] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:41.761 [2024-07-25 11:36:57.589035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:41.761 [2024-07-25 11:36:57.589046] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:41.761 [2024-07-25 11:36:57.589057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:41.761 11:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:42.018 [2024-07-25 11:36:57.869629] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:25:42.018 BaseBdev1 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:42.018 11:36:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:42.276 11:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:42.535 [ 00:25:42.535 { 00:25:42.535 "name": "BaseBdev1", 00:25:42.535 "aliases": [ 00:25:42.535 "47a517bd-b7ff-46bd-8a1c-101369e5f095" 00:25:42.535 ], 00:25:42.535 "product_name": "Malloc disk", 00:25:42.535 "block_size": 512, 00:25:42.535 "num_blocks": 65536, 00:25:42.535 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:42.535 "assigned_rate_limits": { 00:25:42.535 "rw_ios_per_sec": 0, 00:25:42.535 "rw_mbytes_per_sec": 0, 00:25:42.535 "r_mbytes_per_sec": 0, 00:25:42.535 "w_mbytes_per_sec": 0 00:25:42.535 }, 00:25:42.535 "claimed": true, 00:25:42.535 "claim_type": "exclusive_write", 00:25:42.535 "zoned": false, 00:25:42.535 "supported_io_types": { 00:25:42.535 "read": true, 00:25:42.535 "write": true, 00:25:42.535 "unmap": true, 00:25:42.535 "flush": true, 00:25:42.535 "reset": true, 00:25:42.535 "nvme_admin": false, 00:25:42.535 "nvme_io": false, 00:25:42.535 "nvme_io_md": false, 00:25:42.535 "write_zeroes": true, 00:25:42.535 "zcopy": true, 00:25:42.535 "get_zone_info": false, 00:25:42.535 "zone_management": false, 00:25:42.535 "zone_append": false, 00:25:42.535 "compare": false, 00:25:42.535 "compare_and_write": false, 00:25:42.535 "abort": true, 00:25:42.535 "seek_hole": false, 00:25:42.535 "seek_data": false, 00:25:42.535 "copy": true, 00:25:42.535 "nvme_iov_md": false 00:25:42.535 }, 00:25:42.535 "memory_domains": [ 00:25:42.535 { 00:25:42.535 "dma_device_id": "system", 00:25:42.535 "dma_device_type": 1 00:25:42.535 }, 00:25:42.535 { 00:25:42.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.535 "dma_device_type": 2 00:25:42.535 } 00:25:42.535 ], 00:25:42.535 "driver_specific": {} 00:25:42.535 } 00:25:42.535 ] 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.535 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.102 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.102 "name": "Existed_Raid", 00:25:43.102 "uuid": "6d821343-06e7-48a1-8243-a7dc1c7b72d1", 00:25:43.102 "strip_size_kb": 64, 00:25:43.102 "state": "configuring", 00:25:43.102 "raid_level": "raid5f", 00:25:43.102 "superblock": true, 00:25:43.102 "num_base_bdevs": 3, 00:25:43.102 "num_base_bdevs_discovered": 1, 00:25:43.102 "num_base_bdevs_operational": 3, 00:25:43.102 "base_bdevs_list": [ 00:25:43.102 { 00:25:43.102 "name": "BaseBdev1", 00:25:43.102 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:43.102 "is_configured": true, 00:25:43.102 "data_offset": 2048, 00:25:43.102 "data_size": 63488 00:25:43.102 }, 00:25:43.102 { 00:25:43.102 "name": "BaseBdev2", 00:25:43.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.102 "is_configured": false, 00:25:43.102 "data_offset": 0, 00:25:43.102 "data_size": 0 00:25:43.102 }, 00:25:43.102 { 00:25:43.102 "name": "BaseBdev3", 00:25:43.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.102 "is_configured": false, 00:25:43.102 "data_offset": 0, 00:25:43.102 "data_size": 0 00:25:43.102 } 00:25:43.102 ] 00:25:43.102 }' 00:25:43.102 11:36:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.102 11:36:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.669 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:43.669 [2024-07-25 11:36:59.542326] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:43.669 [2024-07-25 11:36:59.542426] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:43.929 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:44.188 [2024-07-25 11:36:59.858490] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.188 [2024-07-25 11:36:59.860986] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:44.188 [2024-07-25 11:36:59.861037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:44.188 [2024-07-25 11:36:59.861073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:25:44.188 [2024-07-25 11:36:59.861087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.188 11:36:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.446 11:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.446 "name": "Existed_Raid", 00:25:44.446 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:44.446 "strip_size_kb": 64, 00:25:44.446 "state": "configuring", 00:25:44.446 "raid_level": "raid5f", 00:25:44.446 "superblock": true, 00:25:44.446 "num_base_bdevs": 3, 00:25:44.446 "num_base_bdevs_discovered": 1, 00:25:44.446 "num_base_bdevs_operational": 3, 00:25:44.446 "base_bdevs_list": [ 00:25:44.446 { 00:25:44.446 "name": "BaseBdev1", 00:25:44.446 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:44.446 "is_configured": true, 00:25:44.446 "data_offset": 2048, 00:25:44.446 "data_size": 63488 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "name": "BaseBdev2", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.446 "is_configured": false, 00:25:44.446 "data_offset": 0, 00:25:44.446 "data_size": 0 00:25:44.446 }, 00:25:44.446 { 00:25:44.446 "name": "BaseBdev3", 00:25:44.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.446 "is_configured": false, 00:25:44.446 "data_offset": 0, 00:25:44.446 "data_size": 0 00:25:44.446 } 00:25:44.446 ] 00:25:44.446 }' 00:25:44.446 11:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.446 11:37:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.013 11:37:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:45.271 [2024-07-25 
11:37:01.096183] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:45.271 BaseBdev2 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:45.271 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:45.529 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:45.788 [ 00:25:45.788 { 00:25:45.788 "name": "BaseBdev2", 00:25:45.788 "aliases": [ 00:25:45.788 "a9a97c89-3d5d-49e7-bf28-254c73f3d222" 00:25:45.788 ], 00:25:45.788 "product_name": "Malloc disk", 00:25:45.788 "block_size": 512, 00:25:45.788 "num_blocks": 65536, 00:25:45.788 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:45.788 "assigned_rate_limits": { 00:25:45.788 "rw_ios_per_sec": 0, 00:25:45.788 "rw_mbytes_per_sec": 0, 00:25:45.788 "r_mbytes_per_sec": 0, 00:25:45.788 "w_mbytes_per_sec": 0 00:25:45.788 }, 00:25:45.788 "claimed": true, 00:25:45.788 "claim_type": "exclusive_write", 00:25:45.788 "zoned": false, 00:25:45.788 "supported_io_types": { 00:25:45.788 "read": true, 00:25:45.788 "write": true, 00:25:45.788 "unmap": true, 00:25:45.788 "flush": true, 00:25:45.789 "reset": true, 00:25:45.789 "nvme_admin": false, 00:25:45.789 "nvme_io": false, 00:25:45.789 "nvme_io_md": false, 00:25:45.789 "write_zeroes": true, 00:25:45.789 "zcopy": true, 00:25:45.789 "get_zone_info": false, 00:25:45.789 "zone_management": false, 00:25:45.789 "zone_append": false, 00:25:45.789 "compare": false, 00:25:45.789 "compare_and_write": false, 00:25:45.789 "abort": true, 00:25:45.789 "seek_hole": false, 00:25:45.789 "seek_data": false, 00:25:45.789 "copy": true, 00:25:45.789 "nvme_iov_md": false 00:25:45.789 }, 00:25:45.789 "memory_domains": [ 00:25:45.789 { 00:25:45.789 "dma_device_id": "system", 00:25:45.789 "dma_device_type": 1 00:25:45.789 }, 00:25:45.789 { 00:25:45.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.789 "dma_device_type": 2 00:25:45.789 } 00:25:45.789 ], 00:25:45.789 "driver_specific": {} 00:25:45.789 } 00:25:45.789 ] 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.789 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.048 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.048 "name": "Existed_Raid", 00:25:46.048 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:46.048 "strip_size_kb": 64, 00:25:46.048 "state": "configuring", 00:25:46.048 "raid_level": "raid5f", 00:25:46.048 "superblock": true, 00:25:46.048 "num_base_bdevs": 3, 00:25:46.048 "num_base_bdevs_discovered": 2, 00:25:46.048 "num_base_bdevs_operational": 3, 00:25:46.048 "base_bdevs_list": [ 00:25:46.048 { 00:25:46.048 "name": "BaseBdev1", 00:25:46.048 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:46.048 "is_configured": true, 00:25:46.048 "data_offset": 2048, 00:25:46.048 "data_size": 63488 00:25:46.048 }, 00:25:46.048 { 00:25:46.048 "name": "BaseBdev2", 00:25:46.048 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:46.048 "is_configured": true, 00:25:46.048 "data_offset": 2048, 00:25:46.048 "data_size": 63488 00:25:46.048 }, 00:25:46.048 { 00:25:46.048 "name": "BaseBdev3", 00:25:46.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.048 "is_configured": false, 00:25:46.048 "data_offset": 0, 00:25:46.048 "data_size": 0 00:25:46.048 } 00:25:46.048 ] 00:25:46.048 }' 00:25:46.048 11:37:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.048 11:37:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.983 11:37:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:47.241 [2024-07-25 11:37:02.873173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.241 [2024-07-25 11:37:02.873828] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:47.241 [2024-07-25 11:37:02.873988] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:47.241 [2024-07-25 11:37:02.874430] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:47.241 BaseBdev3 00:25:47.241 [2024-07-25 11:37:02.880129] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:47.241 [2024-07-25 11:37:02.880295] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:47.241 [2024-07-25 11:37:02.880651] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:47.241 11:37:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:47.500 11:37:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:47.500 [ 00:25:47.500 { 00:25:47.500 "name": "BaseBdev3", 00:25:47.500 "aliases": [ 00:25:47.500 "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2" 00:25:47.500 ], 00:25:47.500 "product_name": "Malloc disk", 00:25:47.500 "block_size": 512, 00:25:47.500 "num_blocks": 65536, 00:25:47.500 "uuid": "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2", 00:25:47.500 "assigned_rate_limits": { 00:25:47.500 "rw_ios_per_sec": 0, 00:25:47.500 "rw_mbytes_per_sec": 0, 00:25:47.500 "r_mbytes_per_sec": 0, 00:25:47.500 "w_mbytes_per_sec": 0 00:25:47.500 }, 00:25:47.500 "claimed": true, 00:25:47.500 "claim_type": "exclusive_write", 00:25:47.500 "zoned": false, 00:25:47.500 "supported_io_types": { 00:25:47.500 "read": true, 00:25:47.500 "write": true, 00:25:47.500 "unmap": true, 00:25:47.500 "flush": true, 00:25:47.500 "reset": true, 00:25:47.500 "nvme_admin": false, 00:25:47.500 "nvme_io": false, 00:25:47.500 "nvme_io_md": false, 00:25:47.500 "write_zeroes": true, 00:25:47.500 "zcopy": true, 00:25:47.500 "get_zone_info": false, 00:25:47.500 "zone_management": false, 00:25:47.500 "zone_append": false, 00:25:47.500 "compare": false, 00:25:47.500 "compare_and_write": false, 00:25:47.500 "abort": true, 00:25:47.500 "seek_hole": false, 00:25:47.500 "seek_data": false, 00:25:47.500 "copy": true, 00:25:47.500 "nvme_iov_md": false 00:25:47.500 }, 00:25:47.500 "memory_domains": [ 00:25:47.500 { 00:25:47.500 "dma_device_id": "system", 00:25:47.500 "dma_device_type": 1 00:25:47.500 }, 00:25:47.500 { 00:25:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.500 "dma_device_type": 2 00:25:47.500 } 00:25:47.500 ], 00:25:47.500 "driver_specific": {} 00:25:47.500 } 00:25:47.500 ] 00:25:47.500 11:37:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:47.500 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.501 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.759 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.759 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.019 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.019 "name": "Existed_Raid", 00:25:48.019 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:48.019 "strip_size_kb": 64, 00:25:48.019 "state": "online", 00:25:48.019 "raid_level": "raid5f", 00:25:48.019 "superblock": true, 00:25:48.019 "num_base_bdevs": 3, 00:25:48.019 "num_base_bdevs_discovered": 3, 00:25:48.019 "num_base_bdevs_operational": 3, 00:25:48.019 "base_bdevs_list": [ 00:25:48.019 { 00:25:48.019 "name": "BaseBdev1", 00:25:48.019 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:48.019 "is_configured": true, 00:25:48.019 "data_offset": 2048, 00:25:48.019 "data_size": 63488 00:25:48.019 }, 00:25:48.019 { 00:25:48.019 "name": "BaseBdev2", 00:25:48.019 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:48.019 "is_configured": true, 00:25:48.019 "data_offset": 2048, 00:25:48.019 "data_size": 63488 00:25:48.019 }, 00:25:48.019 { 00:25:48.019 "name": "BaseBdev3", 00:25:48.019 "uuid": "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2", 00:25:48.019 "is_configured": true, 00:25:48.019 "data_offset": 2048, 00:25:48.019 "data_size": 63488 00:25:48.019 } 00:25:48.019 ] 00:25:48.019 }' 00:25:48.019 11:37:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.019 11:37:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:48.586 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:48.845 [2024-07-25 11:37:04.475221] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:48.845 "name": "Existed_Raid", 00:25:48.845 "aliases": [ 00:25:48.845 "292a2e5a-4587-444a-b4bc-03c0673195eb" 00:25:48.845 ], 00:25:48.845 "product_name": "Raid Volume", 00:25:48.845 "block_size": 512, 00:25:48.845 "num_blocks": 126976, 00:25:48.845 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:48.845 "assigned_rate_limits": { 00:25:48.845 "rw_ios_per_sec": 0, 00:25:48.845 "rw_mbytes_per_sec": 0, 00:25:48.845 "r_mbytes_per_sec": 0, 00:25:48.845 "w_mbytes_per_sec": 0 00:25:48.845 }, 00:25:48.845 "claimed": false, 00:25:48.845 "zoned": false, 00:25:48.845 "supported_io_types": { 00:25:48.845 "read": true, 00:25:48.845 "write": true, 00:25:48.845 "unmap": false, 00:25:48.845 "flush": false, 00:25:48.845 "reset": true, 00:25:48.845 "nvme_admin": false, 00:25:48.845 "nvme_io": false, 00:25:48.845 "nvme_io_md": false, 00:25:48.845 "write_zeroes": true, 00:25:48.845 "zcopy": false, 00:25:48.845 "get_zone_info": false, 00:25:48.845 "zone_management": false, 00:25:48.845 "zone_append": false, 00:25:48.845 "compare": false, 00:25:48.845 "compare_and_write": false, 00:25:48.845 "abort": false, 00:25:48.845 "seek_hole": false, 00:25:48.845 "seek_data": false, 00:25:48.845 "copy": false, 00:25:48.845 "nvme_iov_md": false 00:25:48.845 }, 00:25:48.845 "driver_specific": { 00:25:48.845 "raid": { 00:25:48.845 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:48.845 "strip_size_kb": 64, 00:25:48.845 "state": "online", 00:25:48.845 "raid_level": "raid5f", 00:25:48.845 "superblock": true, 00:25:48.845 "num_base_bdevs": 3, 00:25:48.845 "num_base_bdevs_discovered": 3, 00:25:48.845 "num_base_bdevs_operational": 3, 00:25:48.845 "base_bdevs_list": [ 00:25:48.845 { 00:25:48.845 "name": "BaseBdev1", 00:25:48.845 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:48.845 "is_configured": true, 00:25:48.845 "data_offset": 2048, 00:25:48.845 "data_size": 63488 00:25:48.845 }, 00:25:48.845 { 00:25:48.845 "name": "BaseBdev2", 00:25:48.845 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:48.845 "is_configured": true, 00:25:48.845 "data_offset": 2048, 00:25:48.845 "data_size": 63488 00:25:48.845 }, 00:25:48.845 { 00:25:48.845 "name": "BaseBdev3", 00:25:48.845 "uuid": "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2", 00:25:48.845 "is_configured": true, 00:25:48.845 "data_offset": 2048, 00:25:48.845 "data_size": 63488 00:25:48.845 } 00:25:48.845 ] 00:25:48.845 } 00:25:48.845 } 00:25:48.845 }' 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:48.845 BaseBdev2 00:25:48.845 BaseBdev3' 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:48.845 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.103 "name": "BaseBdev1", 00:25:49.103 "aliases": [ 00:25:49.103 "47a517bd-b7ff-46bd-8a1c-101369e5f095" 00:25:49.103 ], 00:25:49.103 "product_name": "Malloc disk", 00:25:49.103 "block_size": 512, 00:25:49.103 "num_blocks": 65536, 00:25:49.103 "uuid": "47a517bd-b7ff-46bd-8a1c-101369e5f095", 00:25:49.103 "assigned_rate_limits": { 00:25:49.103 "rw_ios_per_sec": 0, 00:25:49.103 "rw_mbytes_per_sec": 0, 00:25:49.103 "r_mbytes_per_sec": 0, 00:25:49.103 "w_mbytes_per_sec": 0 00:25:49.103 }, 00:25:49.103 "claimed": true, 00:25:49.103 "claim_type": "exclusive_write", 00:25:49.103 "zoned": false, 00:25:49.103 "supported_io_types": { 00:25:49.103 "read": true, 00:25:49.103 "write": true, 00:25:49.103 "unmap": true, 00:25:49.103 "flush": true, 00:25:49.103 "reset": true, 00:25:49.103 "nvme_admin": false, 00:25:49.103 "nvme_io": false, 00:25:49.103 "nvme_io_md": false, 00:25:49.103 "write_zeroes": true, 00:25:49.103 "zcopy": true, 00:25:49.103 "get_zone_info": false, 00:25:49.103 "zone_management": false, 00:25:49.103 "zone_append": false, 00:25:49.103 "compare": false, 00:25:49.103 "compare_and_write": false, 00:25:49.103 "abort": true, 00:25:49.103 "seek_hole": false, 00:25:49.103 "seek_data": false, 00:25:49.103 "copy": true, 00:25:49.103 "nvme_iov_md": false 00:25:49.103 }, 00:25:49.103 "memory_domains": [ 00:25:49.103 { 00:25:49.103 "dma_device_id": "system", 00:25:49.103 "dma_device_type": 1 00:25:49.103 }, 00:25:49.103 { 00:25:49.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.103 "dma_device_type": 2 00:25:49.103 } 00:25:49.103 ], 00:25:49.103 "driver_specific": {} 00:25:49.103 }' 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.103 11:37:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.362 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.621 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.621 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:49.621 11:37:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.621 "name": "BaseBdev2", 00:25:49.621 "aliases": [ 00:25:49.621 "a9a97c89-3d5d-49e7-bf28-254c73f3d222" 00:25:49.621 ], 00:25:49.621 "product_name": "Malloc disk", 00:25:49.621 "block_size": 512, 00:25:49.621 "num_blocks": 65536, 00:25:49.621 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:49.621 "assigned_rate_limits": { 00:25:49.621 "rw_ios_per_sec": 0, 00:25:49.621 "rw_mbytes_per_sec": 0, 00:25:49.621 "r_mbytes_per_sec": 0, 00:25:49.621 "w_mbytes_per_sec": 0 00:25:49.621 }, 00:25:49.621 "claimed": true, 00:25:49.621 "claim_type": "exclusive_write", 00:25:49.621 "zoned": false, 00:25:49.621 "supported_io_types": { 00:25:49.621 "read": true, 00:25:49.621 "write": true, 00:25:49.621 "unmap": true, 00:25:49.621 "flush": true, 00:25:49.621 "reset": true, 00:25:49.621 "nvme_admin": false, 00:25:49.621 "nvme_io": false, 00:25:49.621 "nvme_io_md": false, 00:25:49.621 "write_zeroes": true, 00:25:49.621 "zcopy": true, 00:25:49.621 "get_zone_info": false, 00:25:49.621 "zone_management": false, 00:25:49.621 "zone_append": false, 00:25:49.621 "compare": false, 00:25:49.621 "compare_and_write": false, 00:25:49.621 "abort": true, 00:25:49.621 "seek_hole": false, 00:25:49.621 "seek_data": false, 00:25:49.621 "copy": true, 00:25:49.621 "nvme_iov_md": false 00:25:49.621 }, 00:25:49.621 "memory_domains": [ 00:25:49.621 { 00:25:49.621 "dma_device_id": "system", 00:25:49.621 "dma_device_type": 1 00:25:49.621 }, 00:25:49.621 { 00:25:49.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.621 "dma_device_type": 2 00:25:49.621 } 00:25:49.621 ], 00:25:49.621 "driver_specific": {} 00:25:49.621 }' 00:25:49.621 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.879 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:50.138 11:37:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.396 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.396 "name": "BaseBdev3", 00:25:50.396 "aliases": [ 00:25:50.396 
"4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2" 00:25:50.396 ], 00:25:50.396 "product_name": "Malloc disk", 00:25:50.396 "block_size": 512, 00:25:50.396 "num_blocks": 65536, 00:25:50.396 "uuid": "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2", 00:25:50.396 "assigned_rate_limits": { 00:25:50.396 "rw_ios_per_sec": 0, 00:25:50.396 "rw_mbytes_per_sec": 0, 00:25:50.396 "r_mbytes_per_sec": 0, 00:25:50.396 "w_mbytes_per_sec": 0 00:25:50.396 }, 00:25:50.396 "claimed": true, 00:25:50.396 "claim_type": "exclusive_write", 00:25:50.396 "zoned": false, 00:25:50.396 "supported_io_types": { 00:25:50.396 "read": true, 00:25:50.396 "write": true, 00:25:50.396 "unmap": true, 00:25:50.396 "flush": true, 00:25:50.396 "reset": true, 00:25:50.396 "nvme_admin": false, 00:25:50.396 "nvme_io": false, 00:25:50.396 "nvme_io_md": false, 00:25:50.396 "write_zeroes": true, 00:25:50.396 "zcopy": true, 00:25:50.396 "get_zone_info": false, 00:25:50.396 "zone_management": false, 00:25:50.396 "zone_append": false, 00:25:50.396 "compare": false, 00:25:50.396 "compare_and_write": false, 00:25:50.396 "abort": true, 00:25:50.396 "seek_hole": false, 00:25:50.396 "seek_data": false, 00:25:50.396 "copy": true, 00:25:50.396 "nvme_iov_md": false 00:25:50.396 }, 00:25:50.396 "memory_domains": [ 00:25:50.396 { 00:25:50.396 "dma_device_id": "system", 00:25:50.396 "dma_device_type": 1 00:25:50.396 }, 00:25:50.396 { 00:25:50.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.396 "dma_device_type": 2 00:25:50.396 } 00:25:50.396 ], 00:25:50.396 "driver_specific": {} 00:25:50.396 }' 00:25:50.396 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.396 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.654 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.912 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.912 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:51.171 [2024-07-25 11:37:06.807878] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 
-- # return 0 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.171 11:37:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.430 11:37:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.430 "name": "Existed_Raid", 00:25:51.430 "uuid": "292a2e5a-4587-444a-b4bc-03c0673195eb", 00:25:51.430 "strip_size_kb": 64, 00:25:51.430 "state": "online", 00:25:51.430 "raid_level": "raid5f", 00:25:51.430 "superblock": true, 00:25:51.430 "num_base_bdevs": 3, 00:25:51.430 "num_base_bdevs_discovered": 2, 00:25:51.430 "num_base_bdevs_operational": 2, 00:25:51.430 "base_bdevs_list": [ 00:25:51.430 { 00:25:51.430 "name": null, 00:25:51.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.430 "is_configured": false, 00:25:51.430 "data_offset": 2048, 00:25:51.430 "data_size": 63488 00:25:51.430 }, 00:25:51.430 { 00:25:51.430 "name": "BaseBdev2", 00:25:51.430 "uuid": "a9a97c89-3d5d-49e7-bf28-254c73f3d222", 00:25:51.430 "is_configured": true, 00:25:51.430 "data_offset": 2048, 00:25:51.430 "data_size": 63488 00:25:51.430 }, 00:25:51.430 { 00:25:51.430 "name": "BaseBdev3", 00:25:51.430 "uuid": "4dd4a11a-81bd-4192-9cc6-c5b192a5f9f2", 00:25:51.430 "is_configured": true, 00:25:51.430 "data_offset": 2048, 00:25:51.430 "data_size": 63488 00:25:51.430 } 00:25:51.430 ] 00:25:51.430 }' 00:25:51.430 11:37:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.430 11:37:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.366 11:37:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:52.366 11:37:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:52.366 11:37:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.366 11:37:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:52.366 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:52.366 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:52.366 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:52.625 [2024-07-25 11:37:08.398304] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:52.625 [2024-07-25 11:37:08.398490] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:52.625 [2024-07-25 11:37:08.471981] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:52.625 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:52.625 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:52.625 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.625 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:53.191 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:53.191 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:53.191 11:37:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:53.191 [2024-07-25 11:37:08.984491] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:53.191 [2024-07-25 11:37:08.984593] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:53.450 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:54.016 BaseBdev2 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.017 11:37:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:54.274 [ 00:25:54.274 { 00:25:54.274 "name": "BaseBdev2", 00:25:54.274 "aliases": [ 00:25:54.274 "98f5f66e-773a-468d-ad61-10524c74ba52" 00:25:54.274 ], 00:25:54.274 "product_name": "Malloc disk", 00:25:54.274 "block_size": 512, 00:25:54.274 "num_blocks": 65536, 00:25:54.274 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:25:54.274 "assigned_rate_limits": { 00:25:54.274 "rw_ios_per_sec": 0, 00:25:54.274 "rw_mbytes_per_sec": 0, 00:25:54.274 "r_mbytes_per_sec": 0, 00:25:54.275 "w_mbytes_per_sec": 0 00:25:54.275 }, 00:25:54.275 "claimed": false, 00:25:54.275 "zoned": false, 00:25:54.275 "supported_io_types": { 00:25:54.275 "read": true, 00:25:54.275 "write": true, 00:25:54.275 "unmap": true, 00:25:54.275 "flush": true, 00:25:54.275 "reset": true, 00:25:54.275 "nvme_admin": false, 00:25:54.275 "nvme_io": false, 00:25:54.275 "nvme_io_md": false, 00:25:54.275 "write_zeroes": true, 00:25:54.275 "zcopy": true, 00:25:54.275 "get_zone_info": false, 00:25:54.275 "zone_management": false, 00:25:54.275 "zone_append": false, 00:25:54.275 "compare": false, 00:25:54.275 "compare_and_write": false, 00:25:54.275 "abort": true, 00:25:54.275 "seek_hole": false, 00:25:54.275 "seek_data": false, 00:25:54.275 "copy": true, 00:25:54.275 "nvme_iov_md": false 00:25:54.275 }, 00:25:54.275 "memory_domains": [ 00:25:54.275 { 00:25:54.275 "dma_device_id": "system", 00:25:54.275 "dma_device_type": 1 00:25:54.275 }, 00:25:54.275 { 00:25:54.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.275 "dma_device_type": 2 00:25:54.275 } 00:25:54.275 ], 00:25:54.275 "driver_specific": {} 00:25:54.275 } 00:25:54.275 ] 00:25:54.275 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:54.275 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:54.275 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:54.275 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:54.532 BaseBdev3 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
i 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:54.791 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.049 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:55.049 [ 00:25:55.049 { 00:25:55.049 "name": "BaseBdev3", 00:25:55.049 "aliases": [ 00:25:55.049 "1ec13ecc-9361-460b-b5c3-032120c8a5e1" 00:25:55.049 ], 00:25:55.049 "product_name": "Malloc disk", 00:25:55.049 "block_size": 512, 00:25:55.049 "num_blocks": 65536, 00:25:55.049 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:25:55.049 "assigned_rate_limits": { 00:25:55.049 "rw_ios_per_sec": 0, 00:25:55.049 "rw_mbytes_per_sec": 0, 00:25:55.049 "r_mbytes_per_sec": 0, 00:25:55.049 "w_mbytes_per_sec": 0 00:25:55.049 }, 00:25:55.049 "claimed": false, 00:25:55.049 "zoned": false, 00:25:55.049 "supported_io_types": { 00:25:55.049 "read": true, 00:25:55.049 "write": true, 00:25:55.049 "unmap": true, 00:25:55.049 "flush": true, 00:25:55.049 "reset": true, 00:25:55.049 "nvme_admin": false, 00:25:55.049 "nvme_io": false, 00:25:55.049 "nvme_io_md": false, 00:25:55.049 "write_zeroes": true, 00:25:55.049 "zcopy": true, 00:25:55.049 "get_zone_info": false, 00:25:55.049 "zone_management": false, 00:25:55.049 "zone_append": false, 00:25:55.049 "compare": false, 00:25:55.050 "compare_and_write": false, 00:25:55.050 "abort": true, 00:25:55.050 "seek_hole": false, 00:25:55.050 "seek_data": false, 00:25:55.050 "copy": true, 00:25:55.050 "nvme_iov_md": false 00:25:55.050 }, 00:25:55.050 "memory_domains": [ 00:25:55.050 { 00:25:55.050 "dma_device_id": "system", 00:25:55.050 "dma_device_type": 1 00:25:55.050 }, 00:25:55.050 { 00:25:55.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.050 "dma_device_type": 2 00:25:55.050 } 00:25:55.050 ], 00:25:55.050 "driver_specific": {} 00:25:55.050 } 00:25:55.050 ] 00:25:55.050 11:37:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:55.050 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:55.050 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:55.050 11:37:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:25:55.308 [2024-07-25 11:37:11.153619] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:55.308 [2024-07-25 11:37:11.153712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:55.308 [2024-07-25 11:37:11.153783] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:55.308 [2024-07-25 11:37:11.156286] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.308 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.309 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.309 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.567 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.567 "name": "Existed_Raid", 00:25:55.567 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:25:55.567 "strip_size_kb": 64, 00:25:55.567 "state": "configuring", 00:25:55.567 "raid_level": "raid5f", 00:25:55.567 "superblock": true, 00:25:55.567 "num_base_bdevs": 3, 00:25:55.567 "num_base_bdevs_discovered": 2, 00:25:55.567 "num_base_bdevs_operational": 3, 00:25:55.567 "base_bdevs_list": [ 00:25:55.567 { 00:25:55.567 "name": "BaseBdev1", 00:25:55.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.567 "is_configured": false, 00:25:55.567 "data_offset": 0, 00:25:55.567 "data_size": 0 00:25:55.567 }, 00:25:55.567 { 00:25:55.567 "name": "BaseBdev2", 00:25:55.567 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:25:55.567 "is_configured": true, 00:25:55.567 "data_offset": 2048, 00:25:55.567 "data_size": 63488 00:25:55.567 }, 00:25:55.567 { 00:25:55.567 "name": "BaseBdev3", 00:25:55.567 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:25:55.567 "is_configured": true, 00:25:55.567 "data_offset": 2048, 00:25:55.567 "data_size": 63488 00:25:55.567 } 00:25:55.567 ] 00:25:55.567 }' 00:25:55.567 11:37:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.567 11:37:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.500 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:56.758 [2024-07-25 11:37:12.398054] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:56.758 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:56.758 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid5f 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.759 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.049 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.049 "name": "Existed_Raid", 00:25:57.049 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:25:57.049 "strip_size_kb": 64, 00:25:57.049 "state": "configuring", 00:25:57.049 "raid_level": "raid5f", 00:25:57.049 "superblock": true, 00:25:57.049 "num_base_bdevs": 3, 00:25:57.049 "num_base_bdevs_discovered": 1, 00:25:57.049 "num_base_bdevs_operational": 3, 00:25:57.049 "base_bdevs_list": [ 00:25:57.049 { 00:25:57.049 "name": "BaseBdev1", 00:25:57.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.049 "is_configured": false, 00:25:57.049 "data_offset": 0, 00:25:57.049 "data_size": 0 00:25:57.049 }, 00:25:57.049 { 00:25:57.049 "name": null, 00:25:57.049 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:25:57.049 "is_configured": false, 00:25:57.049 "data_offset": 2048, 00:25:57.049 "data_size": 63488 00:25:57.049 }, 00:25:57.049 { 00:25:57.049 "name": "BaseBdev3", 00:25:57.049 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:25:57.049 "is_configured": true, 00:25:57.049 "data_offset": 2048, 00:25:57.049 "data_size": 63488 00:25:57.049 } 00:25:57.049 ] 00:25:57.049 }' 00:25:57.049 11:37:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.049 11:37:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.634 11:37:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.634 11:37:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:57.892 11:37:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:57.892 11:37:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:58.149 [2024-07-25 11:37:13.977335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:58.149 BaseBdev1 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:58.149 11:37:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:58.407 11:37:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:58.665 [ 00:25:58.665 { 00:25:58.665 "name": "BaseBdev1", 00:25:58.665 "aliases": [ 00:25:58.665 "52d9269d-4b82-41f8-bb6a-592da34fc5ba" 00:25:58.665 ], 00:25:58.665 "product_name": "Malloc disk", 00:25:58.665 "block_size": 512, 00:25:58.665 "num_blocks": 65536, 00:25:58.665 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:25:58.665 "assigned_rate_limits": { 00:25:58.665 "rw_ios_per_sec": 0, 00:25:58.665 "rw_mbytes_per_sec": 0, 00:25:58.665 "r_mbytes_per_sec": 0, 00:25:58.665 "w_mbytes_per_sec": 0 00:25:58.665 }, 00:25:58.665 "claimed": true, 00:25:58.665 "claim_type": "exclusive_write", 00:25:58.665 "zoned": false, 00:25:58.665 "supported_io_types": { 00:25:58.665 "read": true, 00:25:58.665 "write": true, 00:25:58.665 "unmap": true, 00:25:58.665 "flush": true, 00:25:58.665 "reset": true, 00:25:58.665 "nvme_admin": false, 00:25:58.665 "nvme_io": false, 00:25:58.665 "nvme_io_md": false, 00:25:58.665 "write_zeroes": true, 00:25:58.665 "zcopy": true, 00:25:58.665 "get_zone_info": false, 00:25:58.665 "zone_management": false, 00:25:58.665 "zone_append": false, 00:25:58.665 "compare": false, 00:25:58.665 "compare_and_write": false, 00:25:58.665 "abort": true, 00:25:58.665 "seek_hole": false, 00:25:58.665 "seek_data": false, 00:25:58.665 "copy": true, 00:25:58.665 "nvme_iov_md": false 00:25:58.665 }, 00:25:58.665 "memory_domains": [ 00:25:58.665 { 00:25:58.665 "dma_device_id": "system", 00:25:58.665 "dma_device_type": 1 00:25:58.665 }, 00:25:58.665 { 00:25:58.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.665 "dma_device_type": 2 00:25:58.665 } 00:25:58.665 ], 00:25:58.665 "driver_specific": {} 00:25:58.665 } 00:25:58.665 ] 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.665 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.924 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.924 "name": "Existed_Raid", 00:25:58.924 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:25:58.924 "strip_size_kb": 64, 00:25:58.924 "state": "configuring", 00:25:58.924 "raid_level": "raid5f", 00:25:58.924 "superblock": true, 00:25:58.924 "num_base_bdevs": 3, 00:25:58.924 "num_base_bdevs_discovered": 2, 00:25:58.924 "num_base_bdevs_operational": 3, 00:25:58.924 "base_bdevs_list": [ 00:25:58.924 { 00:25:58.924 "name": "BaseBdev1", 00:25:58.924 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:25:58.924 "is_configured": true, 00:25:58.924 "data_offset": 2048, 00:25:58.924 "data_size": 63488 00:25:58.924 }, 00:25:58.924 { 00:25:58.924 "name": null, 00:25:58.924 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:25:58.924 "is_configured": false, 00:25:58.924 "data_offset": 2048, 00:25:58.924 "data_size": 63488 00:25:58.924 }, 00:25:58.924 { 00:25:58.924 "name": "BaseBdev3", 00:25:58.924 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:25:58.924 "is_configured": true, 00:25:58.924 "data_offset": 2048, 00:25:58.924 "data_size": 63488 00:25:58.924 } 00:25:58.924 ] 00:25:58.924 }' 00:25:58.924 11:37:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.924 11:37:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.857 11:37:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.857 11:37:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:59.857 11:37:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:59.857 11:37:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:00.423 [2024-07-25 11:37:16.010126] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.423 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.681 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.681 "name": "Existed_Raid", 00:26:00.681 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:00.681 "strip_size_kb": 64, 00:26:00.681 "state": "configuring", 00:26:00.681 "raid_level": "raid5f", 00:26:00.681 "superblock": true, 00:26:00.681 "num_base_bdevs": 3, 00:26:00.681 "num_base_bdevs_discovered": 1, 00:26:00.681 "num_base_bdevs_operational": 3, 00:26:00.681 "base_bdevs_list": [ 00:26:00.681 { 00:26:00.681 "name": "BaseBdev1", 00:26:00.681 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:00.681 "is_configured": true, 00:26:00.681 "data_offset": 2048, 00:26:00.681 "data_size": 63488 00:26:00.681 }, 00:26:00.681 { 00:26:00.681 "name": null, 00:26:00.681 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:00.681 "is_configured": false, 00:26:00.681 "data_offset": 2048, 00:26:00.681 "data_size": 63488 00:26:00.681 }, 00:26:00.681 { 00:26:00.681 "name": null, 00:26:00.681 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:00.681 "is_configured": false, 00:26:00.681 "data_offset": 2048, 00:26:00.681 "data_size": 63488 00:26:00.681 } 00:26:00.681 ] 00:26:00.681 }' 00:26:00.681 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.681 11:37:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.247 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.247 11:37:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:01.548 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:01.548 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:01.806 [2024-07-25 11:37:17.578719] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:01.806 
11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.806 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.064 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.064 "name": "Existed_Raid", 00:26:02.064 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:02.064 "strip_size_kb": 64, 00:26:02.064 "state": "configuring", 00:26:02.064 "raid_level": "raid5f", 00:26:02.064 "superblock": true, 00:26:02.064 "num_base_bdevs": 3, 00:26:02.064 "num_base_bdevs_discovered": 2, 00:26:02.064 "num_base_bdevs_operational": 3, 00:26:02.064 "base_bdevs_list": [ 00:26:02.064 { 00:26:02.064 "name": "BaseBdev1", 00:26:02.064 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:02.064 "is_configured": true, 00:26:02.064 "data_offset": 2048, 00:26:02.064 "data_size": 63488 00:26:02.064 }, 00:26:02.064 { 00:26:02.064 "name": null, 00:26:02.064 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:02.064 "is_configured": false, 00:26:02.064 "data_offset": 2048, 00:26:02.064 "data_size": 63488 00:26:02.064 }, 00:26:02.064 { 00:26:02.064 "name": "BaseBdev3", 00:26:02.064 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:02.064 "is_configured": true, 00:26:02.064 "data_offset": 2048, 00:26:02.064 "data_size": 63488 00:26:02.064 } 00:26:02.064 ] 00:26:02.064 }' 00:26:02.064 11:37:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.064 11:37:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.999 11:37:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.999 11:37:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:02.999 11:37:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:02.999 11:37:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:03.257 [2024-07-25 11:37:19.043308] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid5f 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.515 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.773 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.773 "name": "Existed_Raid", 00:26:03.773 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:03.773 "strip_size_kb": 64, 00:26:03.773 "state": "configuring", 00:26:03.773 "raid_level": "raid5f", 00:26:03.773 "superblock": true, 00:26:03.773 "num_base_bdevs": 3, 00:26:03.773 "num_base_bdevs_discovered": 1, 00:26:03.773 "num_base_bdevs_operational": 3, 00:26:03.773 "base_bdevs_list": [ 00:26:03.773 { 00:26:03.774 "name": null, 00:26:03.774 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:03.774 "is_configured": false, 00:26:03.774 "data_offset": 2048, 00:26:03.774 "data_size": 63488 00:26:03.774 }, 00:26:03.774 { 00:26:03.774 "name": null, 00:26:03.774 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:03.774 "is_configured": false, 00:26:03.774 "data_offset": 2048, 00:26:03.774 "data_size": 63488 00:26:03.774 }, 00:26:03.774 { 00:26:03.774 "name": "BaseBdev3", 00:26:03.774 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:03.774 "is_configured": true, 00:26:03.774 "data_offset": 2048, 00:26:03.774 "data_size": 63488 00:26:03.774 } 00:26:03.774 ] 00:26:03.774 }' 00:26:03.774 11:37:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.774 11:37:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.337 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:04.337 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.594 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:04.594 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:04.853 [2024-07-25 11:37:20.546115] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.853 11:37:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.853 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.148 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.148 "name": "Existed_Raid", 00:26:05.148 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:05.148 "strip_size_kb": 64, 00:26:05.148 "state": "configuring", 00:26:05.148 "raid_level": "raid5f", 00:26:05.148 "superblock": true, 00:26:05.148 "num_base_bdevs": 3, 00:26:05.148 "num_base_bdevs_discovered": 2, 00:26:05.148 "num_base_bdevs_operational": 3, 00:26:05.148 "base_bdevs_list": [ 00:26:05.148 { 00:26:05.148 "name": null, 00:26:05.148 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:05.148 "is_configured": false, 00:26:05.148 "data_offset": 2048, 00:26:05.148 "data_size": 63488 00:26:05.148 }, 00:26:05.148 { 00:26:05.148 "name": "BaseBdev2", 00:26:05.148 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:05.148 "is_configured": true, 00:26:05.148 "data_offset": 2048, 00:26:05.148 "data_size": 63488 00:26:05.148 }, 00:26:05.148 { 00:26:05.148 "name": "BaseBdev3", 00:26:05.148 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:05.148 "is_configured": true, 00:26:05.148 "data_offset": 2048, 00:26:05.148 "data_size": 63488 00:26:05.148 } 00:26:05.148 ] 00:26:05.148 }' 00:26:05.148 11:37:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.148 11:37:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:05.713 11:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.713 11:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:06.279 11:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:06.279 11:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.279 11:37:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:06.279 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 52d9269d-4b82-41f8-bb6a-592da34fc5ba 00:26:06.538 [2024-07-25 11:37:22.414837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:06.538 [2024-07-25 11:37:22.415153] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:06.538 [2024-07-25 11:37:22.415173] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:06.538 [2024-07-25 11:37:22.415496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:06.538 NewBaseBdev 00:26:06.796 [2024-07-25 11:37:22.420518] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:06.796 [2024-07-25 11:37:22.420559] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:06.796 [2024-07-25 11:37:22.420823] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:06.796 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:07.055 [ 00:26:07.055 { 00:26:07.055 "name": "NewBaseBdev", 00:26:07.055 "aliases": [ 00:26:07.055 "52d9269d-4b82-41f8-bb6a-592da34fc5ba" 00:26:07.055 ], 00:26:07.055 "product_name": "Malloc disk", 00:26:07.055 "block_size": 512, 00:26:07.055 "num_blocks": 65536, 00:26:07.055 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:07.055 "assigned_rate_limits": { 00:26:07.055 "rw_ios_per_sec": 0, 00:26:07.055 "rw_mbytes_per_sec": 0, 00:26:07.055 "r_mbytes_per_sec": 0, 00:26:07.055 "w_mbytes_per_sec": 0 00:26:07.055 }, 00:26:07.055 "claimed": true, 00:26:07.055 "claim_type": "exclusive_write", 00:26:07.055 "zoned": false, 00:26:07.055 "supported_io_types": { 00:26:07.055 "read": true, 00:26:07.055 "write": true, 00:26:07.055 "unmap": true, 00:26:07.055 "flush": true, 00:26:07.055 "reset": true, 00:26:07.055 "nvme_admin": false, 00:26:07.055 "nvme_io": false, 00:26:07.055 "nvme_io_md": false, 00:26:07.055 "write_zeroes": true, 00:26:07.055 "zcopy": true, 00:26:07.055 "get_zone_info": false, 00:26:07.055 "zone_management": false, 00:26:07.055 "zone_append": false, 00:26:07.055 "compare": false, 00:26:07.055 "compare_and_write": false, 00:26:07.055 "abort": true, 00:26:07.055 "seek_hole": false, 00:26:07.055 "seek_data": false, 00:26:07.055 "copy": true, 00:26:07.055 "nvme_iov_md": false 00:26:07.055 }, 00:26:07.055 "memory_domains": [ 00:26:07.055 { 00:26:07.055 "dma_device_id": 
"system", 00:26:07.055 "dma_device_type": 1 00:26:07.055 }, 00:26:07.055 { 00:26:07.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.055 "dma_device_type": 2 00:26:07.055 } 00:26:07.055 ], 00:26:07.055 "driver_specific": {} 00:26:07.055 } 00:26:07.055 ] 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.055 11:37:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.325 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.325 "name": "Existed_Raid", 00:26:07.325 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:07.325 "strip_size_kb": 64, 00:26:07.325 "state": "online", 00:26:07.325 "raid_level": "raid5f", 00:26:07.325 "superblock": true, 00:26:07.325 "num_base_bdevs": 3, 00:26:07.325 "num_base_bdevs_discovered": 3, 00:26:07.325 "num_base_bdevs_operational": 3, 00:26:07.325 "base_bdevs_list": [ 00:26:07.325 { 00:26:07.325 "name": "NewBaseBdev", 00:26:07.325 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:07.325 "is_configured": true, 00:26:07.325 "data_offset": 2048, 00:26:07.325 "data_size": 63488 00:26:07.325 }, 00:26:07.325 { 00:26:07.325 "name": "BaseBdev2", 00:26:07.325 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:07.325 "is_configured": true, 00:26:07.325 "data_offset": 2048, 00:26:07.325 "data_size": 63488 00:26:07.325 }, 00:26:07.325 { 00:26:07.325 "name": "BaseBdev3", 00:26:07.325 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:07.325 "is_configured": true, 00:26:07.325 "data_offset": 2048, 00:26:07.325 "data_size": 63488 00:26:07.325 } 00:26:07.325 ] 00:26:07.325 }' 00:26:07.325 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.325 11:37:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=Existed_Raid 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:08.260 11:37:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:08.260 [2024-07-25 11:37:24.120791] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:08.518 "name": "Existed_Raid", 00:26:08.518 "aliases": [ 00:26:08.518 "1c9c23b6-950c-4812-9589-02c053f488f7" 00:26:08.518 ], 00:26:08.518 "product_name": "Raid Volume", 00:26:08.518 "block_size": 512, 00:26:08.518 "num_blocks": 126976, 00:26:08.518 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:08.518 "assigned_rate_limits": { 00:26:08.518 "rw_ios_per_sec": 0, 00:26:08.518 "rw_mbytes_per_sec": 0, 00:26:08.518 "r_mbytes_per_sec": 0, 00:26:08.518 "w_mbytes_per_sec": 0 00:26:08.518 }, 00:26:08.518 "claimed": false, 00:26:08.518 "zoned": false, 00:26:08.518 "supported_io_types": { 00:26:08.518 "read": true, 00:26:08.518 "write": true, 00:26:08.518 "unmap": false, 00:26:08.518 "flush": false, 00:26:08.518 "reset": true, 00:26:08.518 "nvme_admin": false, 00:26:08.518 "nvme_io": false, 00:26:08.518 "nvme_io_md": false, 00:26:08.518 "write_zeroes": true, 00:26:08.518 "zcopy": false, 00:26:08.518 "get_zone_info": false, 00:26:08.518 "zone_management": false, 00:26:08.518 "zone_append": false, 00:26:08.518 "compare": false, 00:26:08.518 "compare_and_write": false, 00:26:08.518 "abort": false, 00:26:08.518 "seek_hole": false, 00:26:08.518 "seek_data": false, 00:26:08.518 "copy": false, 00:26:08.518 "nvme_iov_md": false 00:26:08.518 }, 00:26:08.518 "driver_specific": { 00:26:08.518 "raid": { 00:26:08.518 "uuid": "1c9c23b6-950c-4812-9589-02c053f488f7", 00:26:08.518 "strip_size_kb": 64, 00:26:08.518 "state": "online", 00:26:08.518 "raid_level": "raid5f", 00:26:08.518 "superblock": true, 00:26:08.518 "num_base_bdevs": 3, 00:26:08.518 "num_base_bdevs_discovered": 3, 00:26:08.518 "num_base_bdevs_operational": 3, 00:26:08.518 "base_bdevs_list": [ 00:26:08.518 { 00:26:08.518 "name": "NewBaseBdev", 00:26:08.518 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:08.518 "is_configured": true, 00:26:08.518 "data_offset": 2048, 00:26:08.518 "data_size": 63488 00:26:08.518 }, 00:26:08.518 { 00:26:08.518 "name": "BaseBdev2", 00:26:08.518 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:08.518 "is_configured": true, 00:26:08.518 "data_offset": 2048, 00:26:08.518 "data_size": 63488 00:26:08.518 }, 00:26:08.518 { 00:26:08.518 "name": "BaseBdev3", 00:26:08.518 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:08.518 "is_configured": true, 00:26:08.518 "data_offset": 2048, 00:26:08.518 "data_size": 63488 00:26:08.518 } 00:26:08.518 ] 00:26:08.518 } 00:26:08.518 } 00:26:08.518 }' 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:08.518 BaseBdev2 00:26:08.518 BaseBdev3' 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:08.518 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.776 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.776 "name": "NewBaseBdev", 00:26:08.776 "aliases": [ 00:26:08.776 "52d9269d-4b82-41f8-bb6a-592da34fc5ba" 00:26:08.776 ], 00:26:08.776 "product_name": "Malloc disk", 00:26:08.776 "block_size": 512, 00:26:08.776 "num_blocks": 65536, 00:26:08.776 "uuid": "52d9269d-4b82-41f8-bb6a-592da34fc5ba", 00:26:08.776 "assigned_rate_limits": { 00:26:08.776 "rw_ios_per_sec": 0, 00:26:08.776 "rw_mbytes_per_sec": 0, 00:26:08.776 "r_mbytes_per_sec": 0, 00:26:08.776 "w_mbytes_per_sec": 0 00:26:08.776 }, 00:26:08.776 "claimed": true, 00:26:08.776 "claim_type": "exclusive_write", 00:26:08.776 "zoned": false, 00:26:08.776 "supported_io_types": { 00:26:08.776 "read": true, 00:26:08.776 "write": true, 00:26:08.776 "unmap": true, 00:26:08.776 "flush": true, 00:26:08.776 "reset": true, 00:26:08.776 "nvme_admin": false, 00:26:08.776 "nvme_io": false, 00:26:08.776 "nvme_io_md": false, 00:26:08.776 "write_zeroes": true, 00:26:08.776 "zcopy": true, 00:26:08.776 "get_zone_info": false, 00:26:08.776 "zone_management": false, 00:26:08.776 "zone_append": false, 00:26:08.776 "compare": false, 00:26:08.776 "compare_and_write": false, 00:26:08.776 "abort": true, 00:26:08.776 "seek_hole": false, 00:26:08.776 "seek_data": false, 00:26:08.776 "copy": true, 00:26:08.776 "nvme_iov_md": false 00:26:08.776 }, 00:26:08.776 "memory_domains": [ 00:26:08.776 { 00:26:08.776 "dma_device_id": "system", 00:26:08.776 "dma_device_type": 1 00:26:08.776 }, 00:26:08.776 { 00:26:08.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.776 "dma_device_type": 2 00:26:08.776 } 00:26:08.776 ], 00:26:08.776 "driver_specific": {} 00:26:08.776 }' 00:26:08.776 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.776 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.776 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.776 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.777 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.777 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.777 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.777 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:09.034 11:37:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:09.293 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.293 "name": "BaseBdev2", 00:26:09.293 "aliases": [ 00:26:09.293 "98f5f66e-773a-468d-ad61-10524c74ba52" 00:26:09.293 ], 00:26:09.293 "product_name": "Malloc disk", 00:26:09.293 "block_size": 512, 00:26:09.293 "num_blocks": 65536, 00:26:09.293 "uuid": "98f5f66e-773a-468d-ad61-10524c74ba52", 00:26:09.293 "assigned_rate_limits": { 00:26:09.293 "rw_ios_per_sec": 0, 00:26:09.293 "rw_mbytes_per_sec": 0, 00:26:09.293 "r_mbytes_per_sec": 0, 00:26:09.293 "w_mbytes_per_sec": 0 00:26:09.293 }, 00:26:09.293 "claimed": true, 00:26:09.293 "claim_type": "exclusive_write", 00:26:09.293 "zoned": false, 00:26:09.293 "supported_io_types": { 00:26:09.293 "read": true, 00:26:09.293 "write": true, 00:26:09.293 "unmap": true, 00:26:09.293 "flush": true, 00:26:09.293 "reset": true, 00:26:09.293 "nvme_admin": false, 00:26:09.293 "nvme_io": false, 00:26:09.293 "nvme_io_md": false, 00:26:09.293 "write_zeroes": true, 00:26:09.293 "zcopy": true, 00:26:09.293 "get_zone_info": false, 00:26:09.293 "zone_management": false, 00:26:09.293 "zone_append": false, 00:26:09.293 "compare": false, 00:26:09.293 "compare_and_write": false, 00:26:09.293 "abort": true, 00:26:09.293 "seek_hole": false, 00:26:09.293 "seek_data": false, 00:26:09.293 "copy": true, 00:26:09.293 "nvme_iov_md": false 00:26:09.293 }, 00:26:09.293 "memory_domains": [ 00:26:09.293 { 00:26:09.293 "dma_device_id": "system", 00:26:09.293 "dma_device_type": 1 00:26:09.293 }, 00:26:09.293 { 00:26:09.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.293 "dma_device_type": 2 00:26:09.293 } 00:26:09.293 ], 00:26:09.293 "driver_specific": {} 00:26:09.293 }' 00:26:09.293 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.293 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.551 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.551 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.551 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.551 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.551 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.552 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.552 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.552 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.552 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.809 11:37:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.809 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.809 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:09.809 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.066 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.066 "name": "BaseBdev3", 00:26:10.066 "aliases": [ 00:26:10.066 "1ec13ecc-9361-460b-b5c3-032120c8a5e1" 00:26:10.066 ], 00:26:10.066 "product_name": "Malloc disk", 00:26:10.066 "block_size": 512, 00:26:10.066 "num_blocks": 65536, 00:26:10.066 "uuid": "1ec13ecc-9361-460b-b5c3-032120c8a5e1", 00:26:10.066 "assigned_rate_limits": { 00:26:10.066 "rw_ios_per_sec": 0, 00:26:10.066 "rw_mbytes_per_sec": 0, 00:26:10.066 "r_mbytes_per_sec": 0, 00:26:10.066 "w_mbytes_per_sec": 0 00:26:10.066 }, 00:26:10.066 "claimed": true, 00:26:10.066 "claim_type": "exclusive_write", 00:26:10.066 "zoned": false, 00:26:10.066 "supported_io_types": { 00:26:10.066 "read": true, 00:26:10.066 "write": true, 00:26:10.066 "unmap": true, 00:26:10.066 "flush": true, 00:26:10.066 "reset": true, 00:26:10.066 "nvme_admin": false, 00:26:10.066 "nvme_io": false, 00:26:10.066 "nvme_io_md": false, 00:26:10.067 "write_zeroes": true, 00:26:10.067 "zcopy": true, 00:26:10.067 "get_zone_info": false, 00:26:10.067 "zone_management": false, 00:26:10.067 "zone_append": false, 00:26:10.067 "compare": false, 00:26:10.067 "compare_and_write": false, 00:26:10.067 "abort": true, 00:26:10.067 "seek_hole": false, 00:26:10.067 "seek_data": false, 00:26:10.067 "copy": true, 00:26:10.067 "nvme_iov_md": false 00:26:10.067 }, 00:26:10.067 "memory_domains": [ 00:26:10.067 { 00:26:10.067 "dma_device_id": "system", 00:26:10.067 "dma_device_type": 1 00:26:10.067 }, 00:26:10.067 { 00:26:10.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.067 "dma_device_type": 2 00:26:10.067 } 00:26:10.067 ], 00:26:10.067 "driver_specific": {} 00:26:10.067 }' 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:10.067 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.324 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.324 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.324 11:37:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.324 11:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.324 11:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.324 11:37:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:10.583 [2024-07-25 11:37:26.308336] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:10.583 [2024-07-25 11:37:26.308382] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.583 [2024-07-25 11:37:26.308493] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:10.583 [2024-07-25 11:37:26.308924] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.583 [2024-07-25 11:37:26.308950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 92350 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92350 ']' 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 92350 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92350 00:26:10.583 killing process with pid 92350 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92350' 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 92350 00:26:10.583 [2024-07-25 11:37:26.351682] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.583 11:37:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 92350 00:26:10.841 [2024-07-25 11:37:26.623571] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:12.221 ************************************ 00:26:12.221 END TEST raid5f_state_function_test_sb 00:26:12.221 ************************************ 00:26:12.221 11:37:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:12.221 00:26:12.221 real 0m32.945s 00:26:12.221 user 1m0.301s 00:26:12.221 sys 0m4.332s 00:26:12.221 11:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:12.221 11:37:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.221 11:37:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:26:12.221 11:37:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:12.221 11:37:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:12.221 11:37:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:12.221 ************************************ 00:26:12.221 START TEST raid5f_superblock_test 00:26:12.221 ************************************ 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid5f 3 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=93309 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 93309 /var/tmp/spdk-raid.sock 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 93309 ']' 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.221 11:37:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.221 [2024-07-25 11:37:27.967343] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
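(Annotation: the superblock test drives everything through a dedicated RPC socket. The trace just above starts a bare bdev_svc app with -L bdev_raid debug logging, records its pid as raid_pid, and waits on /var/tmp/spdk-raid.sock before issuing any RPCs. A minimal sketch of that startup, reconstructed from the trace; the paths and the 93309 pid are specific to this run, and capturing the pid with $! is an assumption about the script, not something shown literally in the log:

    # start the bare bdev application with RAID debug logging on its own RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!                                         # expands to 93309 in this run
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from common/autotest_common.sh
    # every later step talks to this socket via scripts/rpc.py -s /var/tmp/spdk-raid.sock
)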
00:26:12.221 [2024-07-25 11:37:27.967507] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93309 ] 00:26:12.479 [2024-07-25 11:37:28.131546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.738 [2024-07-25 11:37:28.365550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.738 [2024-07-25 11:37:28.565841] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.738 [2024-07-25 11:37:28.565923] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.996 11:37:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:13.254 malloc1 00:26:13.254 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:13.512 [2024-07-25 11:37:29.301217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:13.512 [2024-07-25 11:37:29.301315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.512 [2024-07-25 11:37:29.301344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:13.512 [2024-07-25 11:37:29.301362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.512 [2024-07-25 11:37:29.304149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.512 [2024-07-25 11:37:29.304201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:13.512 pt1 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:13.512 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:13.770 malloc2 00:26:13.770 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.028 [2024-07-25 11:37:29.803978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.028 [2024-07-25 11:37:29.804078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.028 [2024-07-25 11:37:29.804107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:14.028 [2024-07-25 11:37:29.804130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.028 [2024-07-25 11:37:29.806908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.028 [2024-07-25 11:37:29.806978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:14.028 pt2 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:14.028 11:37:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:14.296 malloc3 00:26:14.296 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:14.556 [2024-07-25 11:37:30.307550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:14.556 [2024-07-25 11:37:30.307905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.556 [2024-07-25 11:37:30.307982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:14.556 [2024-07-25 11:37:30.308114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.556 [2024-07-25 11:37:30.310957] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.556 [2024-07-25 11:37:30.311133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:14.556 pt3 00:26:14.556 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:14.556 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:14.556 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:14.814 [2024-07-25 11:37:30.543772] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:14.814 [2024-07-25 11:37:30.546163] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:14.814 [2024-07-25 11:37:30.546252] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:14.814 [2024-07-25 11:37:30.546506] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:14.814 [2024-07-25 11:37:30.546525] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:14.814 [2024-07-25 11:37:30.546920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:14.814 [2024-07-25 11:37:30.552244] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:14.814 [2024-07-25 11:37:30.552404] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:14.814 [2024-07-25 11:37:30.552834] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.814 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.815 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:15.075 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.075 "name": "raid_bdev1", 00:26:15.075 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:15.075 "strip_size_kb": 64, 00:26:15.075 "state": "online", 00:26:15.075 "raid_level": "raid5f", 00:26:15.075 "superblock": true, 00:26:15.075 
"num_base_bdevs": 3, 00:26:15.075 "num_base_bdevs_discovered": 3, 00:26:15.075 "num_base_bdevs_operational": 3, 00:26:15.075 "base_bdevs_list": [ 00:26:15.075 { 00:26:15.075 "name": "pt1", 00:26:15.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:15.075 "is_configured": true, 00:26:15.075 "data_offset": 2048, 00:26:15.075 "data_size": 63488 00:26:15.075 }, 00:26:15.075 { 00:26:15.075 "name": "pt2", 00:26:15.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.075 "is_configured": true, 00:26:15.075 "data_offset": 2048, 00:26:15.075 "data_size": 63488 00:26:15.075 }, 00:26:15.075 { 00:26:15.075 "name": "pt3", 00:26:15.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.075 "is_configured": true, 00:26:15.075 "data_offset": 2048, 00:26:15.075 "data_size": 63488 00:26:15.075 } 00:26:15.075 ] 00:26:15.075 }' 00:26:15.075 11:37:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.075 11:37:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.640 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:15.640 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:15.640 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:15.640 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:15.641 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:15.641 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:15.641 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:15.641 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:15.898 [2024-07-25 11:37:31.671062] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.898 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:15.898 "name": "raid_bdev1", 00:26:15.898 "aliases": [ 00:26:15.898 "119271d6-ab84-4624-9b33-1b7d7d87a157" 00:26:15.898 ], 00:26:15.898 "product_name": "Raid Volume", 00:26:15.898 "block_size": 512, 00:26:15.898 "num_blocks": 126976, 00:26:15.898 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:15.898 "assigned_rate_limits": { 00:26:15.898 "rw_ios_per_sec": 0, 00:26:15.898 "rw_mbytes_per_sec": 0, 00:26:15.898 "r_mbytes_per_sec": 0, 00:26:15.898 "w_mbytes_per_sec": 0 00:26:15.898 }, 00:26:15.898 "claimed": false, 00:26:15.898 "zoned": false, 00:26:15.898 "supported_io_types": { 00:26:15.898 "read": true, 00:26:15.898 "write": true, 00:26:15.898 "unmap": false, 00:26:15.898 "flush": false, 00:26:15.898 "reset": true, 00:26:15.898 "nvme_admin": false, 00:26:15.898 "nvme_io": false, 00:26:15.898 "nvme_io_md": false, 00:26:15.898 "write_zeroes": true, 00:26:15.898 "zcopy": false, 00:26:15.898 "get_zone_info": false, 00:26:15.898 "zone_management": false, 00:26:15.898 "zone_append": false, 00:26:15.898 "compare": false, 00:26:15.898 "compare_and_write": false, 00:26:15.898 "abort": false, 00:26:15.898 "seek_hole": false, 00:26:15.898 "seek_data": false, 00:26:15.898 "copy": false, 00:26:15.898 "nvme_iov_md": false 00:26:15.898 }, 00:26:15.898 "driver_specific": { 00:26:15.898 "raid": { 00:26:15.898 "uuid": 
"119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:15.898 "strip_size_kb": 64, 00:26:15.898 "state": "online", 00:26:15.898 "raid_level": "raid5f", 00:26:15.898 "superblock": true, 00:26:15.898 "num_base_bdevs": 3, 00:26:15.898 "num_base_bdevs_discovered": 3, 00:26:15.898 "num_base_bdevs_operational": 3, 00:26:15.898 "base_bdevs_list": [ 00:26:15.898 { 00:26:15.898 "name": "pt1", 00:26:15.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:15.898 "is_configured": true, 00:26:15.898 "data_offset": 2048, 00:26:15.898 "data_size": 63488 00:26:15.898 }, 00:26:15.898 { 00:26:15.898 "name": "pt2", 00:26:15.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.899 "is_configured": true, 00:26:15.899 "data_offset": 2048, 00:26:15.899 "data_size": 63488 00:26:15.899 }, 00:26:15.899 { 00:26:15.899 "name": "pt3", 00:26:15.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:15.899 "is_configured": true, 00:26:15.899 "data_offset": 2048, 00:26:15.899 "data_size": 63488 00:26:15.899 } 00:26:15.899 ] 00:26:15.899 } 00:26:15.899 } 00:26:15.899 }' 00:26:15.899 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:15.899 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:15.899 pt2 00:26:15.899 pt3' 00:26:15.899 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:15.899 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:15.899 11:37:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.157 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.157 "name": "pt1", 00:26:16.157 "aliases": [ 00:26:16.157 "00000000-0000-0000-0000-000000000001" 00:26:16.157 ], 00:26:16.157 "product_name": "passthru", 00:26:16.157 "block_size": 512, 00:26:16.157 "num_blocks": 65536, 00:26:16.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:16.157 "assigned_rate_limits": { 00:26:16.157 "rw_ios_per_sec": 0, 00:26:16.157 "rw_mbytes_per_sec": 0, 00:26:16.157 "r_mbytes_per_sec": 0, 00:26:16.157 "w_mbytes_per_sec": 0 00:26:16.157 }, 00:26:16.157 "claimed": true, 00:26:16.157 "claim_type": "exclusive_write", 00:26:16.157 "zoned": false, 00:26:16.157 "supported_io_types": { 00:26:16.157 "read": true, 00:26:16.157 "write": true, 00:26:16.157 "unmap": true, 00:26:16.157 "flush": true, 00:26:16.157 "reset": true, 00:26:16.157 "nvme_admin": false, 00:26:16.157 "nvme_io": false, 00:26:16.157 "nvme_io_md": false, 00:26:16.157 "write_zeroes": true, 00:26:16.157 "zcopy": true, 00:26:16.157 "get_zone_info": false, 00:26:16.157 "zone_management": false, 00:26:16.157 "zone_append": false, 00:26:16.157 "compare": false, 00:26:16.157 "compare_and_write": false, 00:26:16.157 "abort": true, 00:26:16.157 "seek_hole": false, 00:26:16.157 "seek_data": false, 00:26:16.157 "copy": true, 00:26:16.157 "nvme_iov_md": false 00:26:16.157 }, 00:26:16.157 "memory_domains": [ 00:26:16.157 { 00:26:16.157 "dma_device_id": "system", 00:26:16.157 "dma_device_type": 1 00:26:16.157 }, 00:26:16.157 { 00:26:16.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.157 "dma_device_type": 2 00:26:16.157 } 00:26:16.157 ], 00:26:16.157 "driver_specific": { 00:26:16.157 "passthru": { 00:26:16.157 "name": "pt1", 00:26:16.157 "base_bdev_name": "malloc1" 
00:26:16.157 } 00:26:16.157 } 00:26:16.157 }' 00:26:16.157 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.415 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:16.672 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.931 "name": "pt2", 00:26:16.931 "aliases": [ 00:26:16.931 "00000000-0000-0000-0000-000000000002" 00:26:16.931 ], 00:26:16.931 "product_name": "passthru", 00:26:16.931 "block_size": 512, 00:26:16.931 "num_blocks": 65536, 00:26:16.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.931 "assigned_rate_limits": { 00:26:16.931 "rw_ios_per_sec": 0, 00:26:16.931 "rw_mbytes_per_sec": 0, 00:26:16.931 "r_mbytes_per_sec": 0, 00:26:16.931 "w_mbytes_per_sec": 0 00:26:16.931 }, 00:26:16.931 "claimed": true, 00:26:16.931 "claim_type": "exclusive_write", 00:26:16.931 "zoned": false, 00:26:16.931 "supported_io_types": { 00:26:16.931 "read": true, 00:26:16.931 "write": true, 00:26:16.931 "unmap": true, 00:26:16.931 "flush": true, 00:26:16.931 "reset": true, 00:26:16.931 "nvme_admin": false, 00:26:16.931 "nvme_io": false, 00:26:16.931 "nvme_io_md": false, 00:26:16.931 "write_zeroes": true, 00:26:16.931 "zcopy": true, 00:26:16.931 "get_zone_info": false, 00:26:16.931 "zone_management": false, 00:26:16.931 "zone_append": false, 00:26:16.931 "compare": false, 00:26:16.931 "compare_and_write": false, 00:26:16.931 "abort": true, 00:26:16.931 "seek_hole": false, 00:26:16.931 "seek_data": false, 00:26:16.931 "copy": true, 00:26:16.931 "nvme_iov_md": false 00:26:16.931 }, 00:26:16.931 "memory_domains": [ 00:26:16.931 { 00:26:16.931 "dma_device_id": "system", 00:26:16.931 "dma_device_type": 1 00:26:16.931 }, 00:26:16.931 { 00:26:16.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.931 "dma_device_type": 2 00:26:16.931 } 00:26:16.931 ], 00:26:16.931 "driver_specific": { 00:26:16.931 "passthru": { 00:26:16.931 "name": "pt2", 00:26:16.931 "base_bdev_name": "malloc2" 00:26:16.931 } 00:26:16.931 } 00:26:16.931 }' 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.931 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.189 11:37:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.189 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.189 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.189 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:17.189 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.447 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.447 "name": "pt3", 00:26:17.447 "aliases": [ 00:26:17.447 "00000000-0000-0000-0000-000000000003" 00:26:17.447 ], 00:26:17.447 "product_name": "passthru", 00:26:17.447 "block_size": 512, 00:26:17.447 "num_blocks": 65536, 00:26:17.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.447 "assigned_rate_limits": { 00:26:17.447 "rw_ios_per_sec": 0, 00:26:17.447 "rw_mbytes_per_sec": 0, 00:26:17.447 "r_mbytes_per_sec": 0, 00:26:17.447 "w_mbytes_per_sec": 0 00:26:17.447 }, 00:26:17.447 "claimed": true, 00:26:17.447 "claim_type": "exclusive_write", 00:26:17.447 "zoned": false, 00:26:17.447 "supported_io_types": { 00:26:17.447 "read": true, 00:26:17.447 "write": true, 00:26:17.447 "unmap": true, 00:26:17.447 "flush": true, 00:26:17.447 "reset": true, 00:26:17.447 "nvme_admin": false, 00:26:17.447 "nvme_io": false, 00:26:17.447 "nvme_io_md": false, 00:26:17.447 "write_zeroes": true, 00:26:17.447 "zcopy": true, 00:26:17.447 "get_zone_info": false, 00:26:17.447 "zone_management": false, 00:26:17.447 "zone_append": false, 00:26:17.447 "compare": false, 00:26:17.447 "compare_and_write": false, 00:26:17.447 "abort": true, 00:26:17.447 "seek_hole": false, 00:26:17.447 "seek_data": false, 00:26:17.447 "copy": true, 00:26:17.447 "nvme_iov_md": false 00:26:17.447 }, 00:26:17.447 "memory_domains": [ 00:26:17.447 { 00:26:17.447 "dma_device_id": "system", 00:26:17.447 "dma_device_type": 1 00:26:17.447 }, 00:26:17.447 { 00:26:17.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.447 "dma_device_type": 2 00:26:17.447 } 00:26:17.447 ], 00:26:17.447 "driver_specific": { 00:26:17.447 "passthru": { 00:26:17.447 "name": "pt3", 00:26:17.447 "base_bdev_name": "malloc3" 00:26:17.447 } 00:26:17.447 } 00:26:17.447 }' 00:26:17.447 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 
-- # jq .block_size 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.705 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.963 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.963 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.963 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:17.963 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:18.221 [2024-07-25 11:37:33.903684] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.221 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=119271d6-ab84-4624-9b33-1b7d7d87a157 00:26:18.221 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 119271d6-ab84-4624-9b33-1b7d7d87a157 ']' 00:26:18.221 11:37:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:18.480 [2024-07-25 11:37:34.175528] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:18.480 [2024-07-25 11:37:34.175568] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.480 [2024-07-25 11:37:34.175664] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.480 [2024-07-25 11:37:34.175783] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:18.480 [2024-07-25 11:37:34.175804] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:18.480 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.480 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:18.738 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:18.738 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:18.738 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:18.738 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:18.997 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:18.997 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:19.255 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:19.255 11:37:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:19.514 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:19.514 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:19.772 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:19.772 [2024-07-25 11:37:35.651953] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:20.030 [2024-07-25 11:37:35.654508] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:20.030 [2024-07-25 11:37:35.654588] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:20.030 [2024-07-25 11:37:35.654674] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:20.030 [2024-07-25 11:37:35.654758] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:20.030 [2024-07-25 11:37:35.654796] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc3 00:26:20.030 [2024-07-25 11:37:35.654819] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:20.030 [2024-07-25 11:37:35.654837] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:20.030 request: 00:26:20.030 { 00:26:20.030 "name": "raid_bdev1", 00:26:20.030 "raid_level": "raid5f", 00:26:20.030 "base_bdevs": [ 00:26:20.030 "malloc1", 00:26:20.030 "malloc2", 00:26:20.030 "malloc3" 00:26:20.030 ], 00:26:20.030 "strip_size_kb": 64, 00:26:20.030 "superblock": false, 00:26:20.030 "method": "bdev_raid_create", 00:26:20.030 "req_id": 1 00:26:20.030 } 00:26:20.030 Got JSON-RPC error response 00:26:20.030 response: 00:26:20.030 { 00:26:20.030 "code": -17, 00:26:20.030 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:20.030 } 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:20.030 11:37:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:20.288 [2024-07-25 11:37:36.112077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:20.288 [2024-07-25 11:37:36.112187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.288 [2024-07-25 11:37:36.112213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:20.288 [2024-07-25 11:37:36.112230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.288 [2024-07-25 11:37:36.115053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.288 [2024-07-25 11:37:36.115116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:20.288 [2024-07-25 11:37:36.115230] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:20.288 [2024-07-25 11:37:36.115296] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:20.288 pt1 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:20.288 11:37:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.288 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.546 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.546 "name": "raid_bdev1", 00:26:20.546 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:20.546 "strip_size_kb": 64, 00:26:20.546 "state": "configuring", 00:26:20.546 "raid_level": "raid5f", 00:26:20.546 "superblock": true, 00:26:20.546 "num_base_bdevs": 3, 00:26:20.546 "num_base_bdevs_discovered": 1, 00:26:20.546 "num_base_bdevs_operational": 3, 00:26:20.546 "base_bdevs_list": [ 00:26:20.546 { 00:26:20.546 "name": "pt1", 00:26:20.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:20.546 "is_configured": true, 00:26:20.547 "data_offset": 2048, 00:26:20.547 "data_size": 63488 00:26:20.547 }, 00:26:20.547 { 00:26:20.547 "name": null, 00:26:20.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:20.547 "is_configured": false, 00:26:20.547 "data_offset": 2048, 00:26:20.547 "data_size": 63488 00:26:20.547 }, 00:26:20.547 { 00:26:20.547 "name": null, 00:26:20.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:20.547 "is_configured": false, 00:26:20.547 "data_offset": 2048, 00:26:20.547 "data_size": 63488 00:26:20.547 } 00:26:20.547 ] 00:26:20.547 }' 00:26:20.547 11:37:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.547 11:37:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.490 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:26:21.490 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:21.490 [2024-07-25 11:37:37.308420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:21.490 [2024-07-25 11:37:37.308519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.490 [2024-07-25 11:37:37.308606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:21.490 [2024-07-25 11:37:37.308626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.490 [2024-07-25 11:37:37.309312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.490 [2024-07-25 11:37:37.309349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:21.490 [2024-07-25 11:37:37.309462] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:21.490 [2024-07-25 
11:37:37.309501] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:21.490 pt2 00:26:21.490 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:21.760 [2024-07-25 11:37:37.524513] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.760 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.019 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.019 "name": "raid_bdev1", 00:26:22.019 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:22.019 "strip_size_kb": 64, 00:26:22.019 "state": "configuring", 00:26:22.019 "raid_level": "raid5f", 00:26:22.019 "superblock": true, 00:26:22.019 "num_base_bdevs": 3, 00:26:22.019 "num_base_bdevs_discovered": 1, 00:26:22.019 "num_base_bdevs_operational": 3, 00:26:22.019 "base_bdevs_list": [ 00:26:22.019 { 00:26:22.019 "name": "pt1", 00:26:22.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.019 "is_configured": true, 00:26:22.019 "data_offset": 2048, 00:26:22.019 "data_size": 63488 00:26:22.019 }, 00:26:22.019 { 00:26:22.019 "name": null, 00:26:22.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.019 "is_configured": false, 00:26:22.019 "data_offset": 2048, 00:26:22.019 "data_size": 63488 00:26:22.019 }, 00:26:22.019 { 00:26:22.019 "name": null, 00:26:22.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:22.019 "is_configured": false, 00:26:22.019 "data_offset": 2048, 00:26:22.019 "data_size": 63488 00:26:22.019 } 00:26:22.019 ] 00:26:22.019 }' 00:26:22.019 11:37:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.019 11:37:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.586 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:22.586 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:22.586 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:22.845 [2024-07-25 11:37:38.628781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:22.845 [2024-07-25 11:37:38.628881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.845 [2024-07-25 11:37:38.628930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:22.845 [2024-07-25 11:37:38.628944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.845 [2024-07-25 11:37:38.629519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.845 [2024-07-25 11:37:38.629558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:22.845 [2024-07-25 11:37:38.629654] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:22.845 [2024-07-25 11:37:38.630035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:22.845 pt2 00:26:22.845 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:22.845 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:22.845 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:23.104 [2024-07-25 11:37:38.904876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:23.104 [2024-07-25 11:37:38.904978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.104 [2024-07-25 11:37:38.905009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:23.104 [2024-07-25 11:37:38.905023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.104 [2024-07-25 11:37:38.905603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.104 [2024-07-25 11:37:38.905628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:23.104 [2024-07-25 11:37:38.905766] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:23.104 [2024-07-25 11:37:38.905799] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:23.104 [2024-07-25 11:37:38.905976] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:23.104 [2024-07-25 11:37:38.905990] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:23.104 [2024-07-25 11:37:38.906309] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:23.104 [2024-07-25 11:37:38.911447] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:23.104 [2024-07-25 11:37:38.911475] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:23.104 [2024-07-25 11:37:38.911860] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.104 pt3 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 
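Note: the trace above reduces to a short RPC pattern: each malloc bdev is wrapped in a passthru bdev that carries a fixed UUID, and once the last base bdev (pt3) is claimed the raid bdev moves from "configuring" to "online". The lines below are a minimal sketch assembled only from commands already visible in this log, with the socket path and UUIDs as recorded; they are an illustration of that pattern, not an extra step of the test.

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # verify_raid_bdev_state performs essentially this check with jq:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expected: online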
00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.104 11:37:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.363 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.363 "name": "raid_bdev1", 00:26:23.363 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:23.363 "strip_size_kb": 64, 00:26:23.363 "state": "online", 00:26:23.363 "raid_level": "raid5f", 00:26:23.363 "superblock": true, 00:26:23.363 "num_base_bdevs": 3, 00:26:23.363 "num_base_bdevs_discovered": 3, 00:26:23.363 "num_base_bdevs_operational": 3, 00:26:23.363 "base_bdevs_list": [ 00:26:23.363 { 00:26:23.363 "name": "pt1", 00:26:23.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:23.363 "is_configured": true, 00:26:23.363 "data_offset": 2048, 00:26:23.363 "data_size": 63488 00:26:23.363 }, 00:26:23.363 { 00:26:23.363 "name": "pt2", 00:26:23.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.363 "is_configured": true, 00:26:23.363 "data_offset": 2048, 00:26:23.363 "data_size": 63488 00:26:23.363 }, 00:26:23.363 { 00:26:23.363 "name": "pt3", 00:26:23.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:23.363 "is_configured": true, 00:26:23.363 "data_offset": 2048, 00:26:23.363 "data_size": 63488 00:26:23.363 } 00:26:23.363 ] 00:26:23.363 }' 00:26:23.363 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.363 11:37:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.929 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:23.929 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:23.929 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:23.929 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:23.930 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:23.930 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:23.930 11:37:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:23.930 11:37:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:24.188 [2024-07-25 11:37:40.058111] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:24.447 "name": "raid_bdev1", 00:26:24.447 "aliases": [ 00:26:24.447 "119271d6-ab84-4624-9b33-1b7d7d87a157" 00:26:24.447 ], 00:26:24.447 "product_name": "Raid Volume", 00:26:24.447 "block_size": 512, 00:26:24.447 "num_blocks": 126976, 00:26:24.447 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:24.447 "assigned_rate_limits": { 00:26:24.447 "rw_ios_per_sec": 0, 00:26:24.447 "rw_mbytes_per_sec": 0, 00:26:24.447 "r_mbytes_per_sec": 0, 00:26:24.447 "w_mbytes_per_sec": 0 00:26:24.447 }, 00:26:24.447 "claimed": false, 00:26:24.447 "zoned": false, 00:26:24.447 "supported_io_types": { 00:26:24.447 "read": true, 00:26:24.447 "write": true, 00:26:24.447 "unmap": false, 00:26:24.447 "flush": false, 00:26:24.447 "reset": true, 00:26:24.447 "nvme_admin": false, 00:26:24.447 "nvme_io": false, 00:26:24.447 "nvme_io_md": false, 00:26:24.447 "write_zeroes": true, 00:26:24.447 "zcopy": false, 00:26:24.447 "get_zone_info": false, 00:26:24.447 "zone_management": false, 00:26:24.447 "zone_append": false, 00:26:24.447 "compare": false, 00:26:24.447 "compare_and_write": false, 00:26:24.447 "abort": false, 00:26:24.447 "seek_hole": false, 00:26:24.447 "seek_data": false, 00:26:24.447 "copy": false, 00:26:24.447 "nvme_iov_md": false 00:26:24.447 }, 00:26:24.447 "driver_specific": { 00:26:24.447 "raid": { 00:26:24.447 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:24.447 "strip_size_kb": 64, 00:26:24.447 "state": "online", 00:26:24.447 "raid_level": "raid5f", 00:26:24.447 "superblock": true, 00:26:24.447 "num_base_bdevs": 3, 00:26:24.447 "num_base_bdevs_discovered": 3, 00:26:24.447 "num_base_bdevs_operational": 3, 00:26:24.447 "base_bdevs_list": [ 00:26:24.447 { 00:26:24.447 "name": "pt1", 00:26:24.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 }, 00:26:24.447 { 00:26:24.447 "name": "pt2", 00:26:24.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 }, 00:26:24.447 { 00:26:24.447 "name": "pt3", 00:26:24.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:24.447 "is_configured": true, 00:26:24.447 "data_offset": 2048, 00:26:24.447 "data_size": 63488 00:26:24.447 } 00:26:24.447 ] 00:26:24.447 } 00:26:24.447 } 00:26:24.447 }' 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:24.447 pt2 00:26:24.447 pt3' 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:24.447 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:24.705 
11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:24.705 "name": "pt1", 00:26:24.705 "aliases": [ 00:26:24.705 "00000000-0000-0000-0000-000000000001" 00:26:24.705 ], 00:26:24.705 "product_name": "passthru", 00:26:24.705 "block_size": 512, 00:26:24.705 "num_blocks": 65536, 00:26:24.705 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.705 "assigned_rate_limits": { 00:26:24.705 "rw_ios_per_sec": 0, 00:26:24.705 "rw_mbytes_per_sec": 0, 00:26:24.705 "r_mbytes_per_sec": 0, 00:26:24.705 "w_mbytes_per_sec": 0 00:26:24.705 }, 00:26:24.705 "claimed": true, 00:26:24.705 "claim_type": "exclusive_write", 00:26:24.705 "zoned": false, 00:26:24.705 "supported_io_types": { 00:26:24.705 "read": true, 00:26:24.705 "write": true, 00:26:24.705 "unmap": true, 00:26:24.705 "flush": true, 00:26:24.705 "reset": true, 00:26:24.705 "nvme_admin": false, 00:26:24.705 "nvme_io": false, 00:26:24.705 "nvme_io_md": false, 00:26:24.705 "write_zeroes": true, 00:26:24.705 "zcopy": true, 00:26:24.705 "get_zone_info": false, 00:26:24.705 "zone_management": false, 00:26:24.705 "zone_append": false, 00:26:24.705 "compare": false, 00:26:24.705 "compare_and_write": false, 00:26:24.705 "abort": true, 00:26:24.705 "seek_hole": false, 00:26:24.705 "seek_data": false, 00:26:24.705 "copy": true, 00:26:24.705 "nvme_iov_md": false 00:26:24.705 }, 00:26:24.705 "memory_domains": [ 00:26:24.705 { 00:26:24.705 "dma_device_id": "system", 00:26:24.705 "dma_device_type": 1 00:26:24.705 }, 00:26:24.705 { 00:26:24.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.705 "dma_device_type": 2 00:26:24.705 } 00:26:24.705 ], 00:26:24.705 "driver_specific": { 00:26:24.705 "passthru": { 00:26:24.705 "name": "pt1", 00:26:24.705 "base_bdev_name": "malloc1" 00:26:24.705 } 00:26:24.705 } 00:26:24.705 }' 00:26:24.705 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.705 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.705 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:24.705 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.705 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:24.963 11:37:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.253 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:25.253 "name": "pt2", 
00:26:25.253 "aliases": [ 00:26:25.253 "00000000-0000-0000-0000-000000000002" 00:26:25.253 ], 00:26:25.253 "product_name": "passthru", 00:26:25.253 "block_size": 512, 00:26:25.253 "num_blocks": 65536, 00:26:25.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.253 "assigned_rate_limits": { 00:26:25.253 "rw_ios_per_sec": 0, 00:26:25.253 "rw_mbytes_per_sec": 0, 00:26:25.253 "r_mbytes_per_sec": 0, 00:26:25.253 "w_mbytes_per_sec": 0 00:26:25.253 }, 00:26:25.253 "claimed": true, 00:26:25.253 "claim_type": "exclusive_write", 00:26:25.253 "zoned": false, 00:26:25.253 "supported_io_types": { 00:26:25.253 "read": true, 00:26:25.253 "write": true, 00:26:25.253 "unmap": true, 00:26:25.253 "flush": true, 00:26:25.253 "reset": true, 00:26:25.253 "nvme_admin": false, 00:26:25.253 "nvme_io": false, 00:26:25.253 "nvme_io_md": false, 00:26:25.253 "write_zeroes": true, 00:26:25.253 "zcopy": true, 00:26:25.253 "get_zone_info": false, 00:26:25.253 "zone_management": false, 00:26:25.253 "zone_append": false, 00:26:25.253 "compare": false, 00:26:25.253 "compare_and_write": false, 00:26:25.253 "abort": true, 00:26:25.253 "seek_hole": false, 00:26:25.253 "seek_data": false, 00:26:25.253 "copy": true, 00:26:25.253 "nvme_iov_md": false 00:26:25.253 }, 00:26:25.253 "memory_domains": [ 00:26:25.253 { 00:26:25.253 "dma_device_id": "system", 00:26:25.253 "dma_device_type": 1 00:26:25.253 }, 00:26:25.253 { 00:26:25.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.253 "dma_device_type": 2 00:26:25.253 } 00:26:25.253 ], 00:26:25.253 "driver_specific": { 00:26:25.253 "passthru": { 00:26:25.253 "name": "pt2", 00:26:25.253 "base_bdev_name": "malloc2" 00:26:25.253 } 00:26:25.253 } 00:26:25.253 }' 00:26:25.253 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.253 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.511 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.770 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:25.770 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.770 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.770 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.027 "name": "pt3", 00:26:26.027 "aliases": [ 00:26:26.027 "00000000-0000-0000-0000-000000000003" 00:26:26.027 ], 00:26:26.027 "product_name": 
"passthru", 00:26:26.027 "block_size": 512, 00:26:26.027 "num_blocks": 65536, 00:26:26.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.027 "assigned_rate_limits": { 00:26:26.027 "rw_ios_per_sec": 0, 00:26:26.027 "rw_mbytes_per_sec": 0, 00:26:26.027 "r_mbytes_per_sec": 0, 00:26:26.027 "w_mbytes_per_sec": 0 00:26:26.027 }, 00:26:26.027 "claimed": true, 00:26:26.027 "claim_type": "exclusive_write", 00:26:26.027 "zoned": false, 00:26:26.027 "supported_io_types": { 00:26:26.027 "read": true, 00:26:26.027 "write": true, 00:26:26.027 "unmap": true, 00:26:26.027 "flush": true, 00:26:26.027 "reset": true, 00:26:26.027 "nvme_admin": false, 00:26:26.027 "nvme_io": false, 00:26:26.027 "nvme_io_md": false, 00:26:26.027 "write_zeroes": true, 00:26:26.027 "zcopy": true, 00:26:26.027 "get_zone_info": false, 00:26:26.027 "zone_management": false, 00:26:26.027 "zone_append": false, 00:26:26.027 "compare": false, 00:26:26.027 "compare_and_write": false, 00:26:26.027 "abort": true, 00:26:26.027 "seek_hole": false, 00:26:26.027 "seek_data": false, 00:26:26.027 "copy": true, 00:26:26.027 "nvme_iov_md": false 00:26:26.027 }, 00:26:26.027 "memory_domains": [ 00:26:26.027 { 00:26:26.027 "dma_device_id": "system", 00:26:26.027 "dma_device_type": 1 00:26:26.027 }, 00:26:26.027 { 00:26:26.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.027 "dma_device_type": 2 00:26:26.027 } 00:26:26.027 ], 00:26:26.027 "driver_specific": { 00:26:26.027 "passthru": { 00:26:26.027 "name": "pt3", 00:26:26.027 "base_bdev_name": "malloc3" 00:26:26.027 } 00:26:26.027 } 00:26:26.027 }' 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.027 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.285 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.285 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.285 11:37:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:26.285 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:26.543 [2024-07-25 11:37:42.350775] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.543 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 119271d6-ab84-4624-9b33-1b7d7d87a157 '!=' 119271d6-ab84-4624-9b33-1b7d7d87a157 ']' 00:26:26.543 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:26:26.543 11:37:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:26:26.543 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:26.543 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:26.801 [2024-07-25 11:37:42.574681] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.801 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.060 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:27.060 "name": "raid_bdev1", 00:26:27.060 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:27.060 "strip_size_kb": 64, 00:26:27.060 "state": "online", 00:26:27.060 "raid_level": "raid5f", 00:26:27.060 "superblock": true, 00:26:27.060 "num_base_bdevs": 3, 00:26:27.060 "num_base_bdevs_discovered": 2, 00:26:27.060 "num_base_bdevs_operational": 2, 00:26:27.060 "base_bdevs_list": [ 00:26:27.060 { 00:26:27.060 "name": null, 00:26:27.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.060 "is_configured": false, 00:26:27.060 "data_offset": 2048, 00:26:27.060 "data_size": 63488 00:26:27.060 }, 00:26:27.060 { 00:26:27.060 "name": "pt2", 00:26:27.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:27.060 "is_configured": true, 00:26:27.060 "data_offset": 2048, 00:26:27.060 "data_size": 63488 00:26:27.060 }, 00:26:27.060 { 00:26:27.060 "name": "pt3", 00:26:27.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:27.060 "is_configured": true, 00:26:27.060 "data_offset": 2048, 00:26:27.060 "data_size": 63488 00:26:27.060 } 00:26:27.060 ] 00:26:27.060 }' 00:26:27.060 11:37:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:27.060 11:37:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.625 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:27.882 [2024-07-25 11:37:43.686905] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:27.882 
[2024-07-25 11:37:43.686968] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.882 [2024-07-25 11:37:43.687056] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.882 [2024-07-25 11:37:43.687148] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.882 [2024-07-25 11:37:43.687162] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:27.882 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:27.882 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.140 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:28.140 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:28.140 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:28.140 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:28.140 11:37:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:28.398 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.398 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:28.398 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:28.655 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:28.655 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:28.655 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:28.655 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:28.655 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:28.914 [2024-07-25 11:37:44.639169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:28.914 [2024-07-25 11:37:44.639266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.914 [2024-07-25 11:37:44.639301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:28.914 [2024-07-25 11:37:44.639317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.914 [2024-07-25 11:37:44.642114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.914 [2024-07-25 11:37:44.642161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:28.914 [2024-07-25 11:37:44.642277] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:28.914 [2024-07-25 11:37:44.642337] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:28.914 pt2 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 2 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.914 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.172 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.172 "name": "raid_bdev1", 00:26:29.172 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:29.172 "strip_size_kb": 64, 00:26:29.172 "state": "configuring", 00:26:29.172 "raid_level": "raid5f", 00:26:29.172 "superblock": true, 00:26:29.172 "num_base_bdevs": 3, 00:26:29.172 "num_base_bdevs_discovered": 1, 00:26:29.172 "num_base_bdevs_operational": 2, 00:26:29.172 "base_bdevs_list": [ 00:26:29.172 { 00:26:29.172 "name": null, 00:26:29.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:29.172 "is_configured": false, 00:26:29.172 "data_offset": 2048, 00:26:29.172 "data_size": 63488 00:26:29.172 }, 00:26:29.172 { 00:26:29.172 "name": "pt2", 00:26:29.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:29.172 "is_configured": true, 00:26:29.172 "data_offset": 2048, 00:26:29.172 "data_size": 63488 00:26:29.172 }, 00:26:29.172 { 00:26:29.172 "name": null, 00:26:29.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:29.172 "is_configured": false, 00:26:29.172 "data_offset": 2048, 00:26:29.172 "data_size": 63488 00:26:29.172 } 00:26:29.172 ] 00:26:29.172 }' 00:26:29.172 11:37:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.172 11:37:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.737 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:29.737 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:29.737 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:26:29.737 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:29.995 [2024-07-25 11:37:45.770154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:29.995 [2024-07-25 11:37:45.770241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.996 [2024-07-25 11:37:45.770280] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:29.996 [2024-07-25 11:37:45.770296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.996 [2024-07-25 11:37:45.770872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.996 [2024-07-25 11:37:45.770899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:29.996 [2024-07-25 11:37:45.771006] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:29.996 [2024-07-25 11:37:45.771064] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:29.996 [2024-07-25 11:37:45.771237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:29.996 [2024-07-25 11:37:45.771253] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:29.996 [2024-07-25 11:37:45.771560] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:29.996 pt3 00:26:29.996 [2024-07-25 11:37:45.776413] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:29.996 [2024-07-25 11:37:45.776444] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:29.996 [2024-07-25 11:37:45.776846] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.996 11:37:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.254 11:37:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.254 "name": "raid_bdev1", 00:26:30.254 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:30.254 "strip_size_kb": 64, 00:26:30.254 "state": "online", 00:26:30.254 "raid_level": "raid5f", 00:26:30.254 "superblock": true, 00:26:30.254 "num_base_bdevs": 3, 00:26:30.254 "num_base_bdevs_discovered": 2, 00:26:30.254 "num_base_bdevs_operational": 2, 00:26:30.254 "base_bdevs_list": [ 00:26:30.254 { 00:26:30.254 "name": null, 00:26:30.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.254 "is_configured": false, 00:26:30.254 
"data_offset": 2048, 00:26:30.254 "data_size": 63488 00:26:30.254 }, 00:26:30.254 { 00:26:30.254 "name": "pt2", 00:26:30.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:30.254 "is_configured": true, 00:26:30.254 "data_offset": 2048, 00:26:30.254 "data_size": 63488 00:26:30.254 }, 00:26:30.254 { 00:26:30.254 "name": "pt3", 00:26:30.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:30.254 "is_configured": true, 00:26:30.254 "data_offset": 2048, 00:26:30.254 "data_size": 63488 00:26:30.254 } 00:26:30.254 ] 00:26:30.254 }' 00:26:30.254 11:37:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.254 11:37:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.189 11:37:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:31.189 [2024-07-25 11:37:46.938804] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.189 [2024-07-25 11:37:46.938850] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:31.189 [2024-07-25 11:37:46.938954] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.189 [2024-07-25 11:37:46.939035] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:31.189 [2024-07-25 11:37:46.939056] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:31.189 11:37:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.189 11:37:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:31.445 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:31.445 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:31.445 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:26:31.445 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:26:31.445 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:31.702 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:31.959 [2024-07-25 11:37:47.745971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:31.959 [2024-07-25 11:37:47.746086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.959 [2024-07-25 11:37:47.746116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:31.959 [2024-07-25 11:37:47.746135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.959 [2024-07-25 11:37:47.748922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.959 [2024-07-25 11:37:47.748979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:31.959 [2024-07-25 11:37:47.749090] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:31.959 [2024-07-25 
11:37:47.749157] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:31.959 [2024-07-25 11:37:47.749341] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:31.959 [2024-07-25 11:37:47.749368] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.959 [2024-07-25 11:37:47.749394] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:31.959 [2024-07-25 11:37:47.749460] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.959 pt1 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.959 11:37:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.217 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.217 "name": "raid_bdev1", 00:26:32.217 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:32.217 "strip_size_kb": 64, 00:26:32.217 "state": "configuring", 00:26:32.217 "raid_level": "raid5f", 00:26:32.217 "superblock": true, 00:26:32.217 "num_base_bdevs": 3, 00:26:32.217 "num_base_bdevs_discovered": 1, 00:26:32.217 "num_base_bdevs_operational": 2, 00:26:32.217 "base_bdevs_list": [ 00:26:32.217 { 00:26:32.217 "name": null, 00:26:32.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.217 "is_configured": false, 00:26:32.217 "data_offset": 2048, 00:26:32.217 "data_size": 63488 00:26:32.217 }, 00:26:32.217 { 00:26:32.217 "name": "pt2", 00:26:32.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.217 "is_configured": true, 00:26:32.217 "data_offset": 2048, 00:26:32.217 "data_size": 63488 00:26:32.217 }, 00:26:32.217 { 00:26:32.217 "name": null, 00:26:32.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.217 "is_configured": false, 00:26:32.217 "data_offset": 2048, 00:26:32.217 "data_size": 63488 00:26:32.217 } 00:26:32.217 ] 00:26:32.217 }' 00:26:32.217 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.217 11:37:48 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.782 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:32.782 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:33.350 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:26:33.350 11:37:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:33.350 [2024-07-25 11:37:49.185211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:33.350 [2024-07-25 11:37:49.185545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.350 [2024-07-25 11:37:49.185636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:33.350 [2024-07-25 11:37:49.185860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.350 [2024-07-25 11:37:49.186467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.350 [2024-07-25 11:37:49.186498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:33.350 [2024-07-25 11:37:49.186600] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:33.351 [2024-07-25 11:37:49.186666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:33.351 [2024-07-25 11:37:49.186830] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:33.351 [2024-07-25 11:37:49.186855] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:33.351 [2024-07-25 11:37:49.187192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:33.351 pt3 00:26:33.351 [2024-07-25 11:37:49.192082] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:33.351 [2024-07-25 11:37:49.192110] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:33.351 [2024-07-25 11:37:49.192435] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.351 
11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.351 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.609 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.609 "name": "raid_bdev1", 00:26:33.609 "uuid": "119271d6-ab84-4624-9b33-1b7d7d87a157", 00:26:33.609 "strip_size_kb": 64, 00:26:33.609 "state": "online", 00:26:33.609 "raid_level": "raid5f", 00:26:33.609 "superblock": true, 00:26:33.609 "num_base_bdevs": 3, 00:26:33.609 "num_base_bdevs_discovered": 2, 00:26:33.609 "num_base_bdevs_operational": 2, 00:26:33.609 "base_bdevs_list": [ 00:26:33.609 { 00:26:33.609 "name": null, 00:26:33.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.609 "is_configured": false, 00:26:33.609 "data_offset": 2048, 00:26:33.609 "data_size": 63488 00:26:33.609 }, 00:26:33.609 { 00:26:33.609 "name": "pt2", 00:26:33.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.609 "is_configured": true, 00:26:33.609 "data_offset": 2048, 00:26:33.609 "data_size": 63488 00:26:33.609 }, 00:26:33.609 { 00:26:33.609 "name": "pt3", 00:26:33.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.609 "is_configured": true, 00:26:33.609 "data_offset": 2048, 00:26:33.609 "data_size": 63488 00:26:33.609 } 00:26:33.609 ] 00:26:33.609 }' 00:26:33.609 11:37:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.609 11:37:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.546 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:34.546 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:34.546 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:34.546 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:34.546 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:34.809 [2024-07-25 11:37:50.542714] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 119271d6-ab84-4624-9b33-1b7d7d87a157 '!=' 119271d6-ab84-4624-9b33-1b7d7d87a157 ']' 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 93309 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 93309 ']' 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 93309 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93309 00:26:34.810 killing process with pid 93309 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93309' 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 93309 00:26:34.810 [2024-07-25 11:37:50.584333] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.810 11:37:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 93309 00:26:34.810 [2024-07-25 11:37:50.584615] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.810 [2024-07-25 11:37:50.584734] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.810 [2024-07-25 11:37:50.584751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:35.076 [2024-07-25 11:37:50.854217] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:36.489 11:37:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:26:36.489 00:26:36.489 real 0m24.176s 00:26:36.489 user 0m44.058s 00:26:36.489 sys 0m3.124s 00:26:36.489 ************************************ 00:26:36.489 END TEST raid5f_superblock_test 00:26:36.489 ************************************ 00:26:36.489 11:37:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:36.489 11:37:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.489 11:37:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:26:36.489 11:37:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:26:36.489 11:37:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:36.489 11:37:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:36.489 11:37:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:36.489 ************************************ 00:26:36.489 START TEST raid5f_rebuild_test 00:26:36.489 ************************************ 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 
00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=94024 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 94024 /var/tmp/spdk-raid.sock 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94024 ']' 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:36.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.489 11:37:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:36.489 Zero copy mechanism will not be used. 00:26:36.489 [2024-07-25 11:37:52.184401] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
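Note: the rebuild test exercises the raid bdev under live I/O by running bdevperf against the same RPC socket used by the script. The invocation recorded above can be read roughly as follows; the flag meanings come from general SPDK bdevperf usage and should be confirmed against bdevperf --help for this revision, and -U is left unannotated rather than guessed at.

    # bdevperf flags used above:
    #   -r /var/tmp/spdk-raid.sock   RPC socket shared with the test script
    #   -T raid_bdev1                run the workload against the raid bdev under test
    #   -t 60 -w randrw -M 50        60 seconds of mixed random I/O, roughly 50% reads
    #   -o 3M -q 2                   3 MiB I/Os (hence the zero-copy notice above) at queue depth 2
    #   -z                           start idle; the workload is kicked off later via RPC
    #   -L bdev_raid                 enable bdev_raid debug logging, which produces the *DEBUG* lines below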
00:26:36.489 [2024-07-25 11:37:52.184592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94024 ] 00:26:36.489 [2024-07-25 11:37:52.359240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.747 [2024-07-25 11:37:52.619080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.006 [2024-07-25 11:37:52.819287] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.006 [2024-07-25 11:37:52.819347] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:37.264 11:37:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:37.264 11:37:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:26:37.264 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:37.264 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:37.523 BaseBdev1_malloc 00:26:37.523 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:37.788 [2024-07-25 11:37:53.610027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:37.788 [2024-07-25 11:37:53.610122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.788 [2024-07-25 11:37:53.610163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:37.788 [2024-07-25 11:37:53.610180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.788 [2024-07-25 11:37:53.613001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.788 [2024-07-25 11:37:53.613049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:37.788 BaseBdev1 00:26:37.788 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:37.788 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:38.060 BaseBdev2_malloc 00:26:38.060 11:37:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:38.318 [2024-07-25 11:37:54.121038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:38.318 [2024-07-25 11:37:54.121129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.318 [2024-07-25 11:37:54.121170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:38.318 [2024-07-25 11:37:54.121186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.318 [2024-07-25 11:37:54.123905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.318 [2024-07-25 11:37:54.123950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:26:38.318 BaseBdev2 00:26:38.318 11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:38.318 11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:38.575 BaseBdev3_malloc 00:26:38.575 11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:38.833 [2024-07-25 11:37:54.644874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:38.833 [2024-07-25 11:37:54.644964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.833 [2024-07-25 11:37:54.645005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:38.833 [2024-07-25 11:37:54.645023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.833 [2024-07-25 11:37:54.647780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.833 [2024-07-25 11:37:54.647823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:38.833 BaseBdev3 00:26:38.833 11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:39.092 spare_malloc 00:26:39.092 11:37:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:39.350 spare_delay 00:26:39.350 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:39.608 [2024-07-25 11:37:55.372539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:39.608 [2024-07-25 11:37:55.372667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.608 [2024-07-25 11:37:55.372726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:39.608 [2024-07-25 11:37:55.372749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.608 [2024-07-25 11:37:55.375583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.608 [2024-07-25 11:37:55.375640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:39.608 spare 00:26:39.608 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:39.867 [2024-07-25 11:37:55.640725] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:39.867 [2024-07-25 11:37:55.643096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:39.867 [2024-07-25 11:37:55.643196] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:39.867 [2024-07-25 11:37:55.643334] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:39.867 [2024-07-25 11:37:55.643353] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:39.867 [2024-07-25 11:37:55.643790] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:39.867 [2024-07-25 11:37:55.649066] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:39.867 [2024-07-25 11:37:55.649093] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:39.867 [2024-07-25 11:37:55.649366] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.867 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.125 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.125 "name": "raid_bdev1", 00:26:40.125 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:40.125 "strip_size_kb": 64, 00:26:40.125 "state": "online", 00:26:40.125 "raid_level": "raid5f", 00:26:40.125 "superblock": false, 00:26:40.125 "num_base_bdevs": 3, 00:26:40.125 "num_base_bdevs_discovered": 3, 00:26:40.125 "num_base_bdevs_operational": 3, 00:26:40.125 "base_bdevs_list": [ 00:26:40.125 { 00:26:40.125 "name": "BaseBdev1", 00:26:40.125 "uuid": "d13cbb0f-ae18-55fd-8013-4ebd949f76c5", 00:26:40.125 "is_configured": true, 00:26:40.125 "data_offset": 0, 00:26:40.125 "data_size": 65536 00:26:40.125 }, 00:26:40.125 { 00:26:40.125 "name": "BaseBdev2", 00:26:40.125 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:40.125 "is_configured": true, 00:26:40.125 "data_offset": 0, 00:26:40.125 "data_size": 65536 00:26:40.125 }, 00:26:40.125 { 00:26:40.125 "name": "BaseBdev3", 00:26:40.125 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:40.125 "is_configured": true, 00:26:40.125 "data_offset": 0, 00:26:40.125 "data_size": 65536 00:26:40.125 } 00:26:40.125 ] 00:26:40.125 }' 00:26:40.125 11:37:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.125 11:37:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.773 11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:26:40.773 11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:41.036 [2024-07-25 11:37:56.755725] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.036 11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=131072 00:26:41.036 11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:41.036 11:37:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:41.294 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:41.552 [2024-07-25 11:37:57.243704] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:41.552 /dev/nbd0 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:26:41.552 1+0 records in 00:26:41.552 1+0 records out 00:26:41.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428771 s, 9.6 MB/s 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 128 00:26:41.552 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:26:42.118 512+0 records in 00:26:42.118 512+0 records out 00:26:42.118 67108864 bytes (67 MB, 64 MiB) copied, 0.43312 s, 155 MB/s 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:42.118 [2024-07-25 11:37:57.982158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.118 11:37:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:42.377 [2024-07-25 11:37:58.215947] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.377 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.943 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:42.943 "name": "raid_bdev1", 00:26:42.943 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:42.943 "strip_size_kb": 64, 00:26:42.943 "state": "online", 00:26:42.943 "raid_level": "raid5f", 00:26:42.943 "superblock": false, 00:26:42.943 "num_base_bdevs": 3, 00:26:42.943 "num_base_bdevs_discovered": 2, 00:26:42.943 "num_base_bdevs_operational": 2, 00:26:42.943 "base_bdevs_list": [ 00:26:42.943 { 00:26:42.943 "name": null, 00:26:42.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.943 "is_configured": false, 00:26:42.943 "data_offset": 0, 00:26:42.943 "data_size": 65536 00:26:42.943 }, 00:26:42.943 { 00:26:42.943 "name": "BaseBdev2", 00:26:42.943 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:42.943 "is_configured": true, 00:26:42.943 "data_offset": 0, 00:26:42.943 "data_size": 65536 00:26:42.943 }, 00:26:42.943 { 00:26:42.943 "name": "BaseBdev3", 00:26:42.943 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:42.943 "is_configured": true, 00:26:42.943 "data_offset": 0, 00:26:42.943 "data_size": 65536 00:26:42.943 } 00:26:42.943 ] 00:26:42.943 }' 00:26:42.943 11:37:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:42.943 11:37:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.509 11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:43.767 [2024-07-25 11:37:59.420264] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:43.767 [2024-07-25 11:37:59.434343] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:26:43.767 [2024-07-25 11:37:59.441928] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:43.767 11:37:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.709 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:44.966 "name": "raid_bdev1", 00:26:44.966 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:44.966 "strip_size_kb": 64, 00:26:44.966 "state": "online", 00:26:44.966 "raid_level": "raid5f", 00:26:44.966 "superblock": false, 00:26:44.966 "num_base_bdevs": 3, 00:26:44.966 "num_base_bdevs_discovered": 3, 00:26:44.966 "num_base_bdevs_operational": 3, 00:26:44.966 "process": { 00:26:44.966 "type": "rebuild", 00:26:44.966 "target": "spare", 00:26:44.966 "progress": { 00:26:44.966 "blocks": 24576, 00:26:44.966 "percent": 18 00:26:44.966 } 00:26:44.966 }, 00:26:44.966 "base_bdevs_list": [ 00:26:44.966 { 00:26:44.966 "name": "spare", 00:26:44.966 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:44.966 "is_configured": true, 00:26:44.966 "data_offset": 0, 00:26:44.966 "data_size": 65536 00:26:44.966 }, 00:26:44.966 { 00:26:44.966 "name": "BaseBdev2", 00:26:44.966 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:44.966 "is_configured": true, 00:26:44.966 "data_offset": 0, 00:26:44.966 "data_size": 65536 00:26:44.966 }, 00:26:44.966 { 00:26:44.966 "name": "BaseBdev3", 00:26:44.966 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:44.966 "is_configured": true, 00:26:44.966 "data_offset": 0, 00:26:44.966 "data_size": 65536 00:26:44.966 } 00:26:44.966 ] 00:26:44.966 }' 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.966 11:38:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:45.223 [2024-07-25 11:38:01.020589] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:45.223 [2024-07-25 11:38:01.060926] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:45.223 [2024-07-25 11:38:01.061030] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.223 [2024-07-25 11:38:01.061056] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:45.223 [2024-07-25 11:38:01.061071] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 2 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.481 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.739 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:45.739 "name": "raid_bdev1", 00:26:45.739 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:45.739 "strip_size_kb": 64, 00:26:45.739 "state": "online", 00:26:45.739 "raid_level": "raid5f", 00:26:45.739 "superblock": false, 00:26:45.739 "num_base_bdevs": 3, 00:26:45.739 "num_base_bdevs_discovered": 2, 00:26:45.739 "num_base_bdevs_operational": 2, 00:26:45.739 "base_bdevs_list": [ 00:26:45.739 { 00:26:45.739 "name": null, 00:26:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:45.739 "is_configured": false, 00:26:45.739 "data_offset": 0, 00:26:45.739 "data_size": 65536 00:26:45.739 }, 00:26:45.739 { 00:26:45.739 "name": "BaseBdev2", 00:26:45.739 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:45.739 "is_configured": true, 00:26:45.739 "data_offset": 0, 00:26:45.739 "data_size": 65536 00:26:45.739 }, 00:26:45.739 { 00:26:45.739 "name": "BaseBdev3", 00:26:45.739 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:45.739 "is_configured": true, 00:26:45.739 "data_offset": 0, 00:26:45.739 "data_size": 65536 00:26:45.739 } 00:26:45.739 ] 00:26:45.739 }' 00:26:45.739 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:45.739 11:38:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.305 11:38:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.562 11:38:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.562 "name": "raid_bdev1", 00:26:46.562 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:46.562 "strip_size_kb": 64, 00:26:46.562 "state": "online", 00:26:46.562 "raid_level": "raid5f", 00:26:46.562 "superblock": false, 00:26:46.562 "num_base_bdevs": 3, 00:26:46.562 "num_base_bdevs_discovered": 2, 00:26:46.562 "num_base_bdevs_operational": 2, 00:26:46.562 "base_bdevs_list": [ 00:26:46.562 { 00:26:46.562 "name": null, 00:26:46.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.562 "is_configured": false, 00:26:46.562 "data_offset": 0, 00:26:46.562 "data_size": 65536 00:26:46.562 }, 00:26:46.562 { 00:26:46.562 "name": "BaseBdev2", 00:26:46.562 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:46.562 "is_configured": true, 00:26:46.562 "data_offset": 0, 00:26:46.562 "data_size": 65536 00:26:46.562 }, 00:26:46.562 { 00:26:46.562 "name": "BaseBdev3", 00:26:46.562 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:46.562 "is_configured": true, 00:26:46.562 "data_offset": 0, 00:26:46.563 "data_size": 65536 00:26:46.563 } 00:26:46.563 ] 00:26:46.563 }' 00:26:46.563 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.563 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:46.563 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.563 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:46.563 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:46.820 [2024-07-25 11:38:02.614024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:46.820 [2024-07-25 11:38:02.627337] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:26:46.820 [2024-07-25 11:38:02.634812] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:46.820 11:38:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.193 "name": "raid_bdev1", 00:26:48.193 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:48.193 "strip_size_kb": 64, 00:26:48.193 "state": "online", 00:26:48.193 "raid_level": "raid5f", 00:26:48.193 "superblock": false, 00:26:48.193 "num_base_bdevs": 3, 00:26:48.193 "num_base_bdevs_discovered": 3, 
00:26:48.193 "num_base_bdevs_operational": 3, 00:26:48.193 "process": { 00:26:48.193 "type": "rebuild", 00:26:48.193 "target": "spare", 00:26:48.193 "progress": { 00:26:48.193 "blocks": 24576, 00:26:48.193 "percent": 18 00:26:48.193 } 00:26:48.193 }, 00:26:48.193 "base_bdevs_list": [ 00:26:48.193 { 00:26:48.193 "name": "spare", 00:26:48.193 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:48.193 "is_configured": true, 00:26:48.193 "data_offset": 0, 00:26:48.193 "data_size": 65536 00:26:48.193 }, 00:26:48.193 { 00:26:48.193 "name": "BaseBdev2", 00:26:48.193 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:48.193 "is_configured": true, 00:26:48.193 "data_offset": 0, 00:26:48.193 "data_size": 65536 00:26:48.193 }, 00:26:48.193 { 00:26:48.193 "name": "BaseBdev3", 00:26:48.193 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:48.193 "is_configured": true, 00:26:48.193 "data_offset": 0, 00:26:48.193 "data_size": 65536 00:26:48.193 } 00:26:48.193 ] 00:26:48.193 }' 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.193 11:38:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1248 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.193 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.451 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:48.451 "name": "raid_bdev1", 00:26:48.451 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:48.451 "strip_size_kb": 64, 00:26:48.451 "state": "online", 00:26:48.451 "raid_level": "raid5f", 00:26:48.451 "superblock": false, 00:26:48.451 "num_base_bdevs": 3, 00:26:48.451 "num_base_bdevs_discovered": 3, 00:26:48.451 "num_base_bdevs_operational": 3, 00:26:48.451 "process": { 00:26:48.451 "type": "rebuild", 00:26:48.451 "target": "spare", 00:26:48.451 "progress": { 00:26:48.451 "blocks": 32768, 00:26:48.451 "percent": 25 00:26:48.451 } 00:26:48.451 }, 00:26:48.451 "base_bdevs_list": [ 00:26:48.451 { 
00:26:48.451 "name": "spare", 00:26:48.451 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:48.451 "is_configured": true, 00:26:48.451 "data_offset": 0, 00:26:48.451 "data_size": 65536 00:26:48.451 }, 00:26:48.451 { 00:26:48.451 "name": "BaseBdev2", 00:26:48.451 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:48.451 "is_configured": true, 00:26:48.451 "data_offset": 0, 00:26:48.451 "data_size": 65536 00:26:48.451 }, 00:26:48.451 { 00:26:48.451 "name": "BaseBdev3", 00:26:48.451 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:48.451 "is_configured": true, 00:26:48.451 "data_offset": 0, 00:26:48.451 "data_size": 65536 00:26:48.451 } 00:26:48.451 ] 00:26:48.451 }' 00:26:48.451 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:48.709 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.709 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:48.709 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.709 11:38:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.643 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.902 "name": "raid_bdev1", 00:26:49.902 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:49.902 "strip_size_kb": 64, 00:26:49.902 "state": "online", 00:26:49.902 "raid_level": "raid5f", 00:26:49.902 "superblock": false, 00:26:49.902 "num_base_bdevs": 3, 00:26:49.902 "num_base_bdevs_discovered": 3, 00:26:49.902 "num_base_bdevs_operational": 3, 00:26:49.902 "process": { 00:26:49.902 "type": "rebuild", 00:26:49.902 "target": "spare", 00:26:49.902 "progress": { 00:26:49.902 "blocks": 59392, 00:26:49.902 "percent": 45 00:26:49.902 } 00:26:49.902 }, 00:26:49.902 "base_bdevs_list": [ 00:26:49.902 { 00:26:49.902 "name": "spare", 00:26:49.902 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:49.902 "is_configured": true, 00:26:49.902 "data_offset": 0, 00:26:49.902 "data_size": 65536 00:26:49.902 }, 00:26:49.902 { 00:26:49.902 "name": "BaseBdev2", 00:26:49.902 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:49.902 "is_configured": true, 00:26:49.902 "data_offset": 0, 00:26:49.902 "data_size": 65536 00:26:49.902 }, 00:26:49.902 { 00:26:49.902 "name": "BaseBdev3", 00:26:49.902 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:49.902 "is_configured": true, 00:26:49.902 "data_offset": 0, 00:26:49.902 "data_size": 65536 
00:26:49.902 } 00:26:49.902 ] 00:26:49.902 }' 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.902 11:38:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.278 11:38:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:51.278 "name": "raid_bdev1", 00:26:51.278 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:51.278 "strip_size_kb": 64, 00:26:51.278 "state": "online", 00:26:51.278 "raid_level": "raid5f", 00:26:51.278 "superblock": false, 00:26:51.278 "num_base_bdevs": 3, 00:26:51.278 "num_base_bdevs_discovered": 3, 00:26:51.278 "num_base_bdevs_operational": 3, 00:26:51.278 "process": { 00:26:51.278 "type": "rebuild", 00:26:51.278 "target": "spare", 00:26:51.278 "progress": { 00:26:51.278 "blocks": 86016, 00:26:51.278 "percent": 65 00:26:51.278 } 00:26:51.278 }, 00:26:51.278 "base_bdevs_list": [ 00:26:51.278 { 00:26:51.278 "name": "spare", 00:26:51.278 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:51.278 "is_configured": true, 00:26:51.278 "data_offset": 0, 00:26:51.278 "data_size": 65536 00:26:51.278 }, 00:26:51.278 { 00:26:51.278 "name": "BaseBdev2", 00:26:51.278 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:51.278 "is_configured": true, 00:26:51.278 "data_offset": 0, 00:26:51.278 "data_size": 65536 00:26:51.278 }, 00:26:51.278 { 00:26:51.278 "name": "BaseBdev3", 00:26:51.278 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:51.278 "is_configured": true, 00:26:51.278 "data_offset": 0, 00:26:51.278 "data_size": 65536 00:26:51.278 } 00:26:51.278 ] 00:26:51.278 }' 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:51.278 11:38:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.652 "name": "raid_bdev1", 00:26:52.652 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:52.652 "strip_size_kb": 64, 00:26:52.652 "state": "online", 00:26:52.652 "raid_level": "raid5f", 00:26:52.652 "superblock": false, 00:26:52.652 "num_base_bdevs": 3, 00:26:52.652 "num_base_bdevs_discovered": 3, 00:26:52.652 "num_base_bdevs_operational": 3, 00:26:52.652 "process": { 00:26:52.652 "type": "rebuild", 00:26:52.652 "target": "spare", 00:26:52.652 "progress": { 00:26:52.652 "blocks": 116736, 00:26:52.652 "percent": 89 00:26:52.652 } 00:26:52.652 }, 00:26:52.652 "base_bdevs_list": [ 00:26:52.652 { 00:26:52.652 "name": "spare", 00:26:52.652 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:52.652 "is_configured": true, 00:26:52.652 "data_offset": 0, 00:26:52.652 "data_size": 65536 00:26:52.652 }, 00:26:52.652 { 00:26:52.652 "name": "BaseBdev2", 00:26:52.652 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:52.652 "is_configured": true, 00:26:52.652 "data_offset": 0, 00:26:52.652 "data_size": 65536 00:26:52.652 }, 00:26:52.652 { 00:26:52.652 "name": "BaseBdev3", 00:26:52.652 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:52.652 "is_configured": true, 00:26:52.652 "data_offset": 0, 00:26:52.652 "data_size": 65536 00:26:52.652 } 00:26:52.652 ] 00:26:52.652 }' 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:52.652 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:52.910 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:52.910 11:38:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:53.476 [2024-07-25 11:38:09.114682] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:53.476 [2024-07-25 11:38:09.114795] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:53.476 [2024-07-25 11:38:09.114873] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.733 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.353 "name": "raid_bdev1", 00:26:54.353 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:54.353 "strip_size_kb": 64, 00:26:54.353 "state": "online", 00:26:54.353 "raid_level": "raid5f", 00:26:54.353 "superblock": false, 00:26:54.353 "num_base_bdevs": 3, 00:26:54.353 "num_base_bdevs_discovered": 3, 00:26:54.353 "num_base_bdevs_operational": 3, 00:26:54.353 "base_bdevs_list": [ 00:26:54.353 { 00:26:54.353 "name": "spare", 00:26:54.353 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:54.353 "is_configured": true, 00:26:54.353 "data_offset": 0, 00:26:54.353 "data_size": 65536 00:26:54.353 }, 00:26:54.353 { 00:26:54.353 "name": "BaseBdev2", 00:26:54.353 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:54.353 "is_configured": true, 00:26:54.353 "data_offset": 0, 00:26:54.353 "data_size": 65536 00:26:54.353 }, 00:26:54.353 { 00:26:54.353 "name": "BaseBdev3", 00:26:54.353 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:54.353 "is_configured": true, 00:26:54.353 "data_offset": 0, 00:26:54.353 "data_size": 65536 00:26:54.353 } 00:26:54.353 ] 00:26:54.353 }' 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.353 11:38:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.611 "name": "raid_bdev1", 00:26:54.611 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:54.611 "strip_size_kb": 64, 00:26:54.611 "state": "online", 00:26:54.611 "raid_level": "raid5f", 00:26:54.611 
"superblock": false, 00:26:54.611 "num_base_bdevs": 3, 00:26:54.611 "num_base_bdevs_discovered": 3, 00:26:54.611 "num_base_bdevs_operational": 3, 00:26:54.611 "base_bdevs_list": [ 00:26:54.611 { 00:26:54.611 "name": "spare", 00:26:54.611 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:54.611 "is_configured": true, 00:26:54.611 "data_offset": 0, 00:26:54.611 "data_size": 65536 00:26:54.611 }, 00:26:54.611 { 00:26:54.611 "name": "BaseBdev2", 00:26:54.611 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 00:26:54.611 "is_configured": true, 00:26:54.611 "data_offset": 0, 00:26:54.611 "data_size": 65536 00:26:54.611 }, 00:26:54.611 { 00:26:54.611 "name": "BaseBdev3", 00:26:54.611 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:54.611 "is_configured": true, 00:26:54.611 "data_offset": 0, 00:26:54.611 "data_size": 65536 00:26:54.611 } 00:26:54.611 ] 00:26:54.611 }' 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.611 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.869 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.869 "name": "raid_bdev1", 00:26:54.869 "uuid": "ceb201be-27ed-45a6-b3af-a480c62803a5", 00:26:54.869 "strip_size_kb": 64, 00:26:54.869 "state": "online", 00:26:54.869 "raid_level": "raid5f", 00:26:54.869 "superblock": false, 00:26:54.869 "num_base_bdevs": 3, 00:26:54.869 "num_base_bdevs_discovered": 3, 00:26:54.869 "num_base_bdevs_operational": 3, 00:26:54.869 "base_bdevs_list": [ 00:26:54.869 { 00:26:54.869 "name": "spare", 00:26:54.869 "uuid": "dfc3e38c-0642-531a-98f9-2065e1b09fd5", 00:26:54.869 "is_configured": true, 00:26:54.869 "data_offset": 0, 00:26:54.869 "data_size": 65536 00:26:54.869 }, 00:26:54.869 { 00:26:54.869 "name": "BaseBdev2", 00:26:54.869 "uuid": "a5368f54-fc87-5ba6-a29a-c32616399fdd", 
00:26:54.869 "is_configured": true, 00:26:54.869 "data_offset": 0, 00:26:54.869 "data_size": 65536 00:26:54.869 }, 00:26:54.869 { 00:26:54.869 "name": "BaseBdev3", 00:26:54.869 "uuid": "72c91655-019f-5441-9121-0257ae7d0089", 00:26:54.869 "is_configured": true, 00:26:54.869 "data_offset": 0, 00:26:54.869 "data_size": 65536 00:26:54.869 } 00:26:54.869 ] 00:26:54.869 }' 00:26:54.869 11:38:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.869 11:38:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.440 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:55.698 [2024-07-25 11:38:11.488677] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:55.698 [2024-07-25 11:38:11.488728] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:55.698 [2024-07-25 11:38:11.488827] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.698 [2024-07-25 11:38:11.488955] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:55.698 [2024-07-25 11:38:11.488972] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:55.698 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.698 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:55.956 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:55.957 11:38:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:56.214 /dev/nbd0 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@869 -- # local i 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.214 1+0 records in 00:26:56.214 1+0 records out 00:26:56.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285795 s, 14.3 MB/s 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.214 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:56.472 /dev/nbd1 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:56.472 1+0 records in 00:26:56.472 1+0 records out 00:26:56.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030678 s, 13.4 MB/s 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.472 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.730 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:56.988 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:57.247 11:38:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:26:57.247 11:38:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 94024 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94024 ']' 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94024 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94024 00:26:57.247 killing process with pid 94024 00:26:57.247 Received shutdown signal, test time was about 60.000000 seconds 00:26:57.247 00:26:57.247 Latency(us) 00:26:57.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.247 =================================================================================================================== 00:26:57.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94024' 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94024 00:26:57.247 11:38:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94024 00:26:57.247 [2024-07-25 11:38:13.027466] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:57.506 [2024-07-25 11:38:13.369530] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:26:58.903 00:26:58.903 real 0m22.437s 00:26:58.903 user 0m33.874s 00:26:58.903 sys 0m2.692s 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.903 ************************************ 00:26:58.903 END TEST raid5f_rebuild_test 00:26:58.903 ************************************ 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.903 11:38:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:26:58.903 11:38:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:58.903 11:38:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.903 11:38:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:58.903 ************************************ 00:26:58.903 START TEST raid5f_rebuild_test_sb 00:26:58.903 ************************************ 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=3 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # 
local verify=true 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:26:58.903 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:58.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=94530 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 94530 /var/tmp/spdk-raid.sock 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94530 ']' 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.904 11:38:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:58.904 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:58.904 Zero copy mechanism will not be used. 00:26:58.904 [2024-07-25 11:38:14.653045] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:26:58.904 [2024-07-25 11:38:14.653189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94530 ] 00:26:59.182 [2024-07-25 11:38:14.815414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.182 [2024-07-25 11:38:15.053328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.439 [2024-07-25 11:38:15.253130] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.439 [2024-07-25 11:38:15.253175] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.697 11:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.697 11:38:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:26:59.697 11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:59.697 11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:00.263 BaseBdev1_malloc 00:27:00.263 11:38:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:00.520 [2024-07-25 11:38:16.166782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:00.520 [2024-07-25 11:38:16.166875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.520 [2024-07-25 11:38:16.166915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:00.520 [2024-07-25 11:38:16.166932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.520 [2024-07-25 11:38:16.169719] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.520 [2024-07-25 11:38:16.169764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:00.520 BaseBdev1 00:27:00.520 11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:00.520 11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:00.778 BaseBdev2_malloc 00:27:00.778 11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:01.036 [2024-07-25 11:38:16.722922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:01.036 [2024-07-25 11:38:16.723008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.036 [2024-07-25 11:38:16.723048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:01.036 [2024-07-25 11:38:16.723063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.036 [2024-07-25 11:38:16.725758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.036 [2024-07-25 11:38:16.725804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:01.036 BaseBdev2 00:27:01.036 11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:01.036 11:38:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:01.293 BaseBdev3_malloc 00:27:01.293 11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:01.551 [2024-07-25 11:38:17.266407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:01.551 [2024-07-25 11:38:17.266495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.551 [2024-07-25 11:38:17.266534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:01.551 [2024-07-25 11:38:17.266550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.551 [2024-07-25 11:38:17.269280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.551 [2024-07-25 11:38:17.269325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:01.551 BaseBdev3 00:27:01.551 11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:01.808 spare_malloc 00:27:01.808 11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:02.066 spare_delay 00:27:02.066 11:38:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:02.324 
[2024-07-25 11:38:18.041700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:02.324 [2024-07-25 11:38:18.041775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.324 [2024-07-25 11:38:18.041812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:02.324 [2024-07-25 11:38:18.041828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.324 [2024-07-25 11:38:18.044581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.324 [2024-07-25 11:38:18.044641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:02.324 spare 00:27:02.324 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:27:02.582 [2024-07-25 11:38:18.285851] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:02.582 [2024-07-25 11:38:18.288206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:02.582 [2024-07-25 11:38:18.288302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:02.582 [2024-07-25 11:38:18.288570] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:02.582 [2024-07-25 11:38:18.288597] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:02.582 [2024-07-25 11:38:18.289004] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:02.582 [2024-07-25 11:38:18.294330] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:02.582 [2024-07-25 11:38:18.294470] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:02.582 [2024-07-25 11:38:18.294874] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.582 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:02.843 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.843 "name": "raid_bdev1", 00:27:02.843 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:02.843 "strip_size_kb": 64, 00:27:02.843 "state": "online", 00:27:02.843 "raid_level": "raid5f", 00:27:02.843 "superblock": true, 00:27:02.843 "num_base_bdevs": 3, 00:27:02.843 "num_base_bdevs_discovered": 3, 00:27:02.843 "num_base_bdevs_operational": 3, 00:27:02.843 "base_bdevs_list": [ 00:27:02.843 { 00:27:02.843 "name": "BaseBdev1", 00:27:02.843 "uuid": "9831a12a-b0f1-5efe-a909-413299715197", 00:27:02.843 "is_configured": true, 00:27:02.843 "data_offset": 2048, 00:27:02.843 "data_size": 63488 00:27:02.843 }, 00:27:02.843 { 00:27:02.843 "name": "BaseBdev2", 00:27:02.843 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:02.843 "is_configured": true, 00:27:02.843 "data_offset": 2048, 00:27:02.843 "data_size": 63488 00:27:02.843 }, 00:27:02.843 { 00:27:02.843 "name": "BaseBdev3", 00:27:02.843 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:02.843 "is_configured": true, 00:27:02.843 "data_offset": 2048, 00:27:02.843 "data_size": 63488 00:27:02.843 } 00:27:02.843 ] 00:27:02.843 }' 00:27:02.843 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.843 11:38:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.410 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:03.410 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.668 [2024-07-25 11:38:19.497062] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.668 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=126976 00:27:03.668 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.668 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:03.926 11:38:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:04.184 [2024-07-25 11:38:20.004996] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:04.185 /dev/nbd0 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:04.185 1+0 records in 00:27:04.185 1+0 records out 00:27:04.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556037 s, 7.4 MB/s 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=256 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # echo 128 00:27:04.185 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:04.750 496+0 records in 00:27:04.750 496+0 records out 00:27:04.750 65011712 bytes (65 MB, 62 MiB) copied, 0.420608 s, 155 MB/s 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:04.750 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:05.008 [2024-07-25 11:38:20.715039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:05.008 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:05.265 [2024-07-25 11:38:20.928802] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:05.265 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.266 11:38:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.524 11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.524 "name": "raid_bdev1", 00:27:05.524 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:05.524 "strip_size_kb": 64, 00:27:05.524 "state": "online", 00:27:05.524 "raid_level": "raid5f", 00:27:05.524 "superblock": true, 00:27:05.524 
"num_base_bdevs": 3, 00:27:05.524 "num_base_bdevs_discovered": 2, 00:27:05.524 "num_base_bdevs_operational": 2, 00:27:05.524 "base_bdevs_list": [ 00:27:05.524 { 00:27:05.524 "name": null, 00:27:05.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.524 "is_configured": false, 00:27:05.524 "data_offset": 2048, 00:27:05.524 "data_size": 63488 00:27:05.524 }, 00:27:05.524 { 00:27:05.524 "name": "BaseBdev2", 00:27:05.524 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:05.524 "is_configured": true, 00:27:05.524 "data_offset": 2048, 00:27:05.524 "data_size": 63488 00:27:05.524 }, 00:27:05.524 { 00:27:05.524 "name": "BaseBdev3", 00:27:05.524 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:05.524 "is_configured": true, 00:27:05.524 "data_offset": 2048, 00:27:05.524 "data_size": 63488 00:27:05.524 } 00:27:05.524 ] 00:27:05.524 }' 00:27:05.524 11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.524 11:38:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.095 11:38:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:06.369 [2024-07-25 11:38:22.141100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:06.369 [2024-07-25 11:38:22.155066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:27:06.369 [2024-07-25 11:38:22.166180] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:06.369 11:38:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.302 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:07.867 "name": "raid_bdev1", 00:27:07.867 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:07.867 "strip_size_kb": 64, 00:27:07.867 "state": "online", 00:27:07.867 "raid_level": "raid5f", 00:27:07.867 "superblock": true, 00:27:07.867 "num_base_bdevs": 3, 00:27:07.867 "num_base_bdevs_discovered": 3, 00:27:07.867 "num_base_bdevs_operational": 3, 00:27:07.867 "process": { 00:27:07.867 "type": "rebuild", 00:27:07.867 "target": "spare", 00:27:07.867 "progress": { 00:27:07.867 "blocks": 24576, 00:27:07.867 "percent": 19 00:27:07.867 } 00:27:07.867 }, 00:27:07.867 "base_bdevs_list": [ 00:27:07.867 { 00:27:07.867 "name": "spare", 00:27:07.867 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:07.867 "is_configured": true, 00:27:07.867 "data_offset": 2048, 00:27:07.867 "data_size": 63488 00:27:07.867 }, 00:27:07.867 { 
00:27:07.867 "name": "BaseBdev2", 00:27:07.867 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:07.867 "is_configured": true, 00:27:07.867 "data_offset": 2048, 00:27:07.867 "data_size": 63488 00:27:07.867 }, 00:27:07.867 { 00:27:07.867 "name": "BaseBdev3", 00:27:07.867 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:07.867 "is_configured": true, 00:27:07.867 "data_offset": 2048, 00:27:07.867 "data_size": 63488 00:27:07.867 } 00:27:07.867 ] 00:27:07.867 }' 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.867 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:08.125 [2024-07-25 11:38:23.864460] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:08.125 [2024-07-25 11:38:23.884742] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:08.125 [2024-07-25 11:38:23.884819] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.125 [2024-07-25 11:38:23.884843] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:08.125 [2024-07-25 11:38:23.884858] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.125 11:38:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.382 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.382 "name": "raid_bdev1", 00:27:08.382 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:08.382 "strip_size_kb": 64, 00:27:08.382 "state": "online", 00:27:08.382 "raid_level": "raid5f", 00:27:08.382 "superblock": true, 00:27:08.382 "num_base_bdevs": 
3, 00:27:08.382 "num_base_bdevs_discovered": 2, 00:27:08.382 "num_base_bdevs_operational": 2, 00:27:08.382 "base_bdevs_list": [ 00:27:08.382 { 00:27:08.382 "name": null, 00:27:08.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.382 "is_configured": false, 00:27:08.382 "data_offset": 2048, 00:27:08.382 "data_size": 63488 00:27:08.382 }, 00:27:08.382 { 00:27:08.382 "name": "BaseBdev2", 00:27:08.382 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:08.382 "is_configured": true, 00:27:08.382 "data_offset": 2048, 00:27:08.382 "data_size": 63488 00:27:08.382 }, 00:27:08.382 { 00:27:08.382 "name": "BaseBdev3", 00:27:08.382 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:08.382 "is_configured": true, 00:27:08.382 "data_offset": 2048, 00:27:08.382 "data_size": 63488 00:27:08.382 } 00:27:08.383 ] 00:27:08.383 }' 00:27:08.383 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.383 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.315 11:38:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.315 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.315 "name": "raid_bdev1", 00:27:09.315 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:09.315 "strip_size_kb": 64, 00:27:09.315 "state": "online", 00:27:09.315 "raid_level": "raid5f", 00:27:09.315 "superblock": true, 00:27:09.315 "num_base_bdevs": 3, 00:27:09.315 "num_base_bdevs_discovered": 2, 00:27:09.315 "num_base_bdevs_operational": 2, 00:27:09.315 "base_bdevs_list": [ 00:27:09.315 { 00:27:09.315 "name": null, 00:27:09.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.315 "is_configured": false, 00:27:09.315 "data_offset": 2048, 00:27:09.315 "data_size": 63488 00:27:09.315 }, 00:27:09.315 { 00:27:09.315 "name": "BaseBdev2", 00:27:09.315 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:09.315 "is_configured": true, 00:27:09.315 "data_offset": 2048, 00:27:09.315 "data_size": 63488 00:27:09.315 }, 00:27:09.315 { 00:27:09.315 "name": "BaseBdev3", 00:27:09.315 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:09.315 "is_configured": true, 00:27:09.315 "data_offset": 2048, 00:27:09.315 "data_size": 63488 00:27:09.315 } 00:27:09.315 ] 00:27:09.315 }' 00:27:09.573 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:09.573 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:09.573 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:09.573 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == 
\n\o\n\e ]] 00:27:09.573 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:09.830 [2024-07-25 11:38:25.502137] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:09.830 [2024-07-25 11:38:25.515313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:27:09.830 [2024-07-25 11:38:25.522488] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:09.830 11:38:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.838 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.096 "name": "raid_bdev1", 00:27:11.096 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:11.096 "strip_size_kb": 64, 00:27:11.096 "state": "online", 00:27:11.096 "raid_level": "raid5f", 00:27:11.096 "superblock": true, 00:27:11.096 "num_base_bdevs": 3, 00:27:11.096 "num_base_bdevs_discovered": 3, 00:27:11.096 "num_base_bdevs_operational": 3, 00:27:11.096 "process": { 00:27:11.096 "type": "rebuild", 00:27:11.096 "target": "spare", 00:27:11.096 "progress": { 00:27:11.096 "blocks": 24576, 00:27:11.096 "percent": 19 00:27:11.096 } 00:27:11.096 }, 00:27:11.096 "base_bdevs_list": [ 00:27:11.096 { 00:27:11.096 "name": "spare", 00:27:11.096 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:11.096 "is_configured": true, 00:27:11.096 "data_offset": 2048, 00:27:11.096 "data_size": 63488 00:27:11.096 }, 00:27:11.096 { 00:27:11.096 "name": "BaseBdev2", 00:27:11.096 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:11.096 "is_configured": true, 00:27:11.096 "data_offset": 2048, 00:27:11.096 "data_size": 63488 00:27:11.096 }, 00:27:11.096 { 00:27:11.096 "name": "BaseBdev3", 00:27:11.096 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:11.096 "is_configured": true, 00:27:11.096 "data_offset": 2048, 00:27:11.096 "data_size": 63488 00:27:11.096 } 00:27:11.096 ] 00:27:11.096 }' 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 
00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:11.096 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=3 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1270 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.096 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:11.097 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:11.097 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.097 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.097 11:38:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.355 "name": "raid_bdev1", 00:27:11.355 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:11.355 "strip_size_kb": 64, 00:27:11.355 "state": "online", 00:27:11.355 "raid_level": "raid5f", 00:27:11.355 "superblock": true, 00:27:11.355 "num_base_bdevs": 3, 00:27:11.355 "num_base_bdevs_discovered": 3, 00:27:11.355 "num_base_bdevs_operational": 3, 00:27:11.355 "process": { 00:27:11.355 "type": "rebuild", 00:27:11.355 "target": "spare", 00:27:11.355 "progress": { 00:27:11.355 "blocks": 30720, 00:27:11.355 "percent": 24 00:27:11.355 } 00:27:11.355 }, 00:27:11.355 "base_bdevs_list": [ 00:27:11.355 { 00:27:11.355 "name": "spare", 00:27:11.355 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:11.355 "is_configured": true, 00:27:11.355 "data_offset": 2048, 00:27:11.355 "data_size": 63488 00:27:11.355 }, 00:27:11.355 { 00:27:11.355 "name": "BaseBdev2", 00:27:11.355 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:11.355 "is_configured": true, 00:27:11.355 "data_offset": 2048, 00:27:11.355 "data_size": 63488 00:27:11.355 }, 00:27:11.355 { 00:27:11.355 "name": "BaseBdev3", 00:27:11.355 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:11.355 "is_configured": true, 00:27:11.355 "data_offset": 2048, 00:27:11.355 "data_size": 63488 00:27:11.355 } 00:27:11.355 ] 00:27:11.355 }' 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.355 11:38:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:12.730 "name": "raid_bdev1", 00:27:12.730 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:12.730 "strip_size_kb": 64, 00:27:12.730 "state": "online", 00:27:12.730 "raid_level": "raid5f", 00:27:12.730 "superblock": true, 00:27:12.730 "num_base_bdevs": 3, 00:27:12.730 "num_base_bdevs_discovered": 3, 00:27:12.730 "num_base_bdevs_operational": 3, 00:27:12.730 "process": { 00:27:12.730 "type": "rebuild", 00:27:12.730 "target": "spare", 00:27:12.730 "progress": { 00:27:12.730 "blocks": 59392, 00:27:12.730 "percent": 46 00:27:12.730 } 00:27:12.730 }, 00:27:12.730 "base_bdevs_list": [ 00:27:12.730 { 00:27:12.730 "name": "spare", 00:27:12.730 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:12.730 "is_configured": true, 00:27:12.730 "data_offset": 2048, 00:27:12.730 "data_size": 63488 00:27:12.730 }, 00:27:12.730 { 00:27:12.730 "name": "BaseBdev2", 00:27:12.730 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:12.730 "is_configured": true, 00:27:12.730 "data_offset": 2048, 00:27:12.730 "data_size": 63488 00:27:12.730 }, 00:27:12.730 { 00:27:12.730 "name": "BaseBdev3", 00:27:12.730 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:12.730 "is_configured": true, 00:27:12.730 "data_offset": 2048, 00:27:12.730 "data_size": 63488 00:27:12.730 } 00:27:12.730 ] 00:27:12.730 }' 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:12.730 11:38:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:14.103 "name": "raid_bdev1", 00:27:14.103 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:14.103 "strip_size_kb": 64, 00:27:14.103 "state": "online", 00:27:14.103 "raid_level": "raid5f", 00:27:14.103 "superblock": true, 00:27:14.103 "num_base_bdevs": 3, 00:27:14.103 "num_base_bdevs_discovered": 3, 00:27:14.103 "num_base_bdevs_operational": 3, 00:27:14.103 "process": { 00:27:14.103 "type": "rebuild", 00:27:14.103 "target": "spare", 00:27:14.103 "progress": { 00:27:14.103 "blocks": 86016, 00:27:14.103 "percent": 67 00:27:14.103 } 00:27:14.103 }, 00:27:14.103 "base_bdevs_list": [ 00:27:14.103 { 00:27:14.103 "name": "spare", 00:27:14.103 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:14.103 "is_configured": true, 00:27:14.103 "data_offset": 2048, 00:27:14.103 "data_size": 63488 00:27:14.103 }, 00:27:14.103 { 00:27:14.103 "name": "BaseBdev2", 00:27:14.103 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:14.103 "is_configured": true, 00:27:14.103 "data_offset": 2048, 00:27:14.103 "data_size": 63488 00:27:14.103 }, 00:27:14.103 { 00:27:14.103 "name": "BaseBdev3", 00:27:14.103 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:14.103 "is_configured": true, 00:27:14.103 "data_offset": 2048, 00:27:14.103 "data_size": 63488 00:27:14.103 } 00:27:14.103 ] 00:27:14.103 }' 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:14.103 11:38:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.473 11:38:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.473 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.473 "name": "raid_bdev1", 00:27:15.473 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:15.473 "strip_size_kb": 64, 
00:27:15.473 "state": "online", 00:27:15.473 "raid_level": "raid5f", 00:27:15.473 "superblock": true, 00:27:15.473 "num_base_bdevs": 3, 00:27:15.473 "num_base_bdevs_discovered": 3, 00:27:15.473 "num_base_bdevs_operational": 3, 00:27:15.473 "process": { 00:27:15.473 "type": "rebuild", 00:27:15.473 "target": "spare", 00:27:15.473 "progress": { 00:27:15.473 "blocks": 114688, 00:27:15.473 "percent": 90 00:27:15.473 } 00:27:15.473 }, 00:27:15.473 "base_bdevs_list": [ 00:27:15.473 { 00:27:15.473 "name": "spare", 00:27:15.473 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:15.473 "is_configured": true, 00:27:15.473 "data_offset": 2048, 00:27:15.473 "data_size": 63488 00:27:15.473 }, 00:27:15.473 { 00:27:15.473 "name": "BaseBdev2", 00:27:15.473 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:15.473 "is_configured": true, 00:27:15.473 "data_offset": 2048, 00:27:15.473 "data_size": 63488 00:27:15.473 }, 00:27:15.473 { 00:27:15.473 "name": "BaseBdev3", 00:27:15.473 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:15.473 "is_configured": true, 00:27:15.473 "data_offset": 2048, 00:27:15.473 "data_size": 63488 00:27:15.473 } 00:27:15.473 ] 00:27:15.473 }' 00:27:15.474 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.474 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.474 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:15.731 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.731 11:38:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:15.988 [2024-07-25 11:38:31.787736] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:15.988 [2024-07-25 11:38:31.788044] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:15.988 [2024-07-25 11:38:31.788204] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.553 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.811 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.811 "name": "raid_bdev1", 00:27:16.811 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:16.811 "strip_size_kb": 64, 00:27:16.811 "state": "online", 00:27:16.811 "raid_level": "raid5f", 00:27:16.811 "superblock": true, 00:27:16.811 "num_base_bdevs": 3, 00:27:16.811 "num_base_bdevs_discovered": 3, 00:27:16.811 
"num_base_bdevs_operational": 3, 00:27:16.811 "base_bdevs_list": [ 00:27:16.811 { 00:27:16.811 "name": "spare", 00:27:16.811 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:16.811 "is_configured": true, 00:27:16.811 "data_offset": 2048, 00:27:16.811 "data_size": 63488 00:27:16.811 }, 00:27:16.811 { 00:27:16.811 "name": "BaseBdev2", 00:27:16.811 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:16.811 "is_configured": true, 00:27:16.811 "data_offset": 2048, 00:27:16.811 "data_size": 63488 00:27:16.811 }, 00:27:16.811 { 00:27:16.811 "name": "BaseBdev3", 00:27:16.811 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:16.811 "is_configured": true, 00:27:16.811 "data_offset": 2048, 00:27:16.811 "data_size": 63488 00:27:16.811 } 00:27:16.811 ] 00:27:16.811 }' 00:27:16.811 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:16.811 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:16.811 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:17.114 "name": "raid_bdev1", 00:27:17.114 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:17.114 "strip_size_kb": 64, 00:27:17.114 "state": "online", 00:27:17.114 "raid_level": "raid5f", 00:27:17.114 "superblock": true, 00:27:17.114 "num_base_bdevs": 3, 00:27:17.114 "num_base_bdevs_discovered": 3, 00:27:17.114 "num_base_bdevs_operational": 3, 00:27:17.114 "base_bdevs_list": [ 00:27:17.114 { 00:27:17.114 "name": "spare", 00:27:17.114 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:17.114 "is_configured": true, 00:27:17.114 "data_offset": 2048, 00:27:17.114 "data_size": 63488 00:27:17.114 }, 00:27:17.114 { 00:27:17.114 "name": "BaseBdev2", 00:27:17.114 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:17.114 "is_configured": true, 00:27:17.114 "data_offset": 2048, 00:27:17.114 "data_size": 63488 00:27:17.114 }, 00:27:17.114 { 00:27:17.114 "name": "BaseBdev3", 00:27:17.114 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:17.114 "is_configured": true, 00:27:17.114 "data_offset": 2048, 00:27:17.114 "data_size": 63488 00:27:17.114 } 00:27:17.114 ] 00:27:17.114 }' 00:27:17.114 11:38:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:17.372 11:38:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:17.372 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.373 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.631 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.631 "name": "raid_bdev1", 00:27:17.631 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:17.631 "strip_size_kb": 64, 00:27:17.631 "state": "online", 00:27:17.631 "raid_level": "raid5f", 00:27:17.631 "superblock": true, 00:27:17.631 "num_base_bdevs": 3, 00:27:17.631 "num_base_bdevs_discovered": 3, 00:27:17.631 "num_base_bdevs_operational": 3, 00:27:17.631 "base_bdevs_list": [ 00:27:17.631 { 00:27:17.631 "name": "spare", 00:27:17.631 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:17.631 "is_configured": true, 00:27:17.631 "data_offset": 2048, 00:27:17.631 "data_size": 63488 00:27:17.631 }, 00:27:17.631 { 00:27:17.631 "name": "BaseBdev2", 00:27:17.631 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:17.631 "is_configured": true, 00:27:17.631 "data_offset": 2048, 00:27:17.631 "data_size": 63488 00:27:17.631 }, 00:27:17.631 { 00:27:17.631 "name": "BaseBdev3", 00:27:17.631 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:17.631 "is_configured": true, 00:27:17.631 "data_offset": 2048, 00:27:17.631 "data_size": 63488 00:27:17.631 } 00:27:17.631 ] 00:27:17.631 }' 00:27:17.631 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.631 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:18.196 11:38:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:18.454 [2024-07-25 11:38:34.161041] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.454 [2024-07-25 11:38:34.161084] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:27:18.454 [2024-07-25 11:38:34.161204] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.454 [2024-07-25 11:38:34.161318] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.454 [2024-07-25 11:38:34.161335] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:18.454 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.454 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.712 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:18.970 /dev/nbd0 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:18.970 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:18.971 1+0 records in 
00:27:18.971 1+0 records out 00:27:18.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220899 s, 18.5 MB/s 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:18.971 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:19.229 /dev/nbd1 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:19.229 1+0 records in 00:27:19.229 1+0 records out 00:27:19.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386959 s, 10.6 MB/s 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:19.229 11:38:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.487 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:19.745 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:19.746 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:20.011 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:20.268 11:38:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:20.268 [2024-07-25 11:38:36.147858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:20.269 [2024-07-25 11:38:36.147942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.269 [2024-07-25 11:38:36.147977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000ae80 00:27:20.269 [2024-07-25 11:38:36.147993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.269 [2024-07-25 11:38:36.150804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.269 [2024-07-25 11:38:36.150848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:20.269 [2024-07-25 11:38:36.151011] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:20.269 [2024-07-25 11:38:36.151081] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.269 [2024-07-25 11:38:36.151232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:20.549 [2024-07-25 11:38:36.151367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:20.549 spare 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:20.549 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.550 [2024-07-25 11:38:36.251495] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:20.550 [2024-07-25 11:38:36.251529] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:20.550 [2024-07-25 11:38:36.251951] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:27:20.550 [2024-07-25 11:38:36.257034] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:20.550 [2024-07-25 11:38:36.257188] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:20.550 [2024-07-25 11:38:36.257541] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:20.550 "name": "raid_bdev1", 00:27:20.550 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:20.550 "strip_size_kb": 64, 00:27:20.550 "state": "online", 00:27:20.550 "raid_level": "raid5f", 00:27:20.550 "superblock": true, 00:27:20.550 "num_base_bdevs": 3, 00:27:20.550 "num_base_bdevs_discovered": 3, 00:27:20.550 
"num_base_bdevs_operational": 3, 00:27:20.550 "base_bdevs_list": [ 00:27:20.550 { 00:27:20.550 "name": "spare", 00:27:20.550 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:20.550 "is_configured": true, 00:27:20.550 "data_offset": 2048, 00:27:20.550 "data_size": 63488 00:27:20.550 }, 00:27:20.550 { 00:27:20.550 "name": "BaseBdev2", 00:27:20.550 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:20.550 "is_configured": true, 00:27:20.550 "data_offset": 2048, 00:27:20.550 "data_size": 63488 00:27:20.550 }, 00:27:20.550 { 00:27:20.550 "name": "BaseBdev3", 00:27:20.550 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:20.550 "is_configured": true, 00:27:20.550 "data_offset": 2048, 00:27:20.550 "data_size": 63488 00:27:20.550 } 00:27:20.550 ] 00:27:20.550 }' 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:20.550 11:38:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.485 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:21.742 "name": "raid_bdev1", 00:27:21.742 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:21.742 "strip_size_kb": 64, 00:27:21.742 "state": "online", 00:27:21.742 "raid_level": "raid5f", 00:27:21.742 "superblock": true, 00:27:21.742 "num_base_bdevs": 3, 00:27:21.742 "num_base_bdevs_discovered": 3, 00:27:21.742 "num_base_bdevs_operational": 3, 00:27:21.742 "base_bdevs_list": [ 00:27:21.742 { 00:27:21.742 "name": "spare", 00:27:21.742 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:21.742 "is_configured": true, 00:27:21.742 "data_offset": 2048, 00:27:21.742 "data_size": 63488 00:27:21.742 }, 00:27:21.742 { 00:27:21.742 "name": "BaseBdev2", 00:27:21.742 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:21.742 "is_configured": true, 00:27:21.742 "data_offset": 2048, 00:27:21.742 "data_size": 63488 00:27:21.742 }, 00:27:21.742 { 00:27:21.742 "name": "BaseBdev3", 00:27:21.742 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:21.742 "is_configured": true, 00:27:21.742 "data_offset": 2048, 00:27:21.742 "data_size": 63488 00:27:21.742 } 00:27:21.742 ] 00:27:21.742 }' 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:21.742 11:38:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:21.742 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.001 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.001 11:38:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:22.259 [2024-07-25 11:38:37.999692] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.259 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.517 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.517 "name": "raid_bdev1", 00:27:22.517 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:22.517 "strip_size_kb": 64, 00:27:22.517 "state": "online", 00:27:22.517 "raid_level": "raid5f", 00:27:22.517 "superblock": true, 00:27:22.517 "num_base_bdevs": 3, 00:27:22.517 "num_base_bdevs_discovered": 2, 00:27:22.517 "num_base_bdevs_operational": 2, 00:27:22.517 "base_bdevs_list": [ 00:27:22.517 { 00:27:22.517 "name": null, 00:27:22.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.517 "is_configured": false, 00:27:22.517 "data_offset": 2048, 00:27:22.517 "data_size": 63488 00:27:22.517 }, 00:27:22.517 { 00:27:22.517 "name": "BaseBdev2", 00:27:22.517 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:22.517 "is_configured": true, 00:27:22.517 "data_offset": 2048, 00:27:22.517 "data_size": 63488 00:27:22.517 }, 00:27:22.517 { 00:27:22.517 "name": "BaseBdev3", 00:27:22.517 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:22.517 "is_configured": true, 00:27:22.517 "data_offset": 2048, 00:27:22.517 "data_size": 63488 00:27:22.517 } 00:27:22.517 ] 00:27:22.517 }' 00:27:22.517 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.517 11:38:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.083 11:38:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:23.341 [2024-07-25 11:38:39.192008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:23.341 [2024-07-25 11:38:39.192294] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:23.341 [2024-07-25 11:38:39.192318] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:23.341 [2024-07-25 11:38:39.192370] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:23.341 [2024-07-25 11:38:39.205448] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:27:23.341 [2024-07-25 11:38:39.212653] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:23.341 11:38:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.716 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:24.716 "name": "raid_bdev1", 00:27:24.716 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:24.716 "strip_size_kb": 64, 00:27:24.716 "state": "online", 00:27:24.716 "raid_level": "raid5f", 00:27:24.716 "superblock": true, 00:27:24.716 "num_base_bdevs": 3, 00:27:24.716 "num_base_bdevs_discovered": 3, 00:27:24.716 "num_base_bdevs_operational": 3, 00:27:24.716 "process": { 00:27:24.716 "type": "rebuild", 00:27:24.716 "target": "spare", 00:27:24.716 "progress": { 00:27:24.716 "blocks": 24576, 00:27:24.716 "percent": 19 00:27:24.716 } 00:27:24.716 }, 00:27:24.716 "base_bdevs_list": [ 00:27:24.716 { 00:27:24.716 "name": "spare", 00:27:24.716 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:24.716 "is_configured": true, 00:27:24.716 "data_offset": 2048, 00:27:24.716 "data_size": 63488 00:27:24.716 }, 00:27:24.716 { 00:27:24.716 "name": "BaseBdev2", 00:27:24.716 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:24.716 "is_configured": true, 00:27:24.716 "data_offset": 2048, 00:27:24.716 "data_size": 63488 00:27:24.716 }, 00:27:24.717 { 00:27:24.717 "name": "BaseBdev3", 00:27:24.717 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:24.717 "is_configured": true, 00:27:24.717 "data_offset": 2048, 00:27:24.717 "data_size": 63488 00:27:24.717 } 00:27:24.717 ] 00:27:24.717 }' 00:27:24.717 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:24.717 11:38:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:24.717 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:24.974 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:24.974 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:25.232 [2024-07-25 11:38:40.858750] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:25.232 [2024-07-25 11:38:40.930718] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:25.232 [2024-07-25 11:38:40.930792] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.232 [2024-07-25 11:38:40.930820] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:25.232 [2024-07-25 11:38:40.930831] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:25.232 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.233 11:38:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.490 11:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:25.490 "name": "raid_bdev1", 00:27:25.490 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:25.490 "strip_size_kb": 64, 00:27:25.490 "state": "online", 00:27:25.490 "raid_level": "raid5f", 00:27:25.490 "superblock": true, 00:27:25.490 "num_base_bdevs": 3, 00:27:25.490 "num_base_bdevs_discovered": 2, 00:27:25.490 "num_base_bdevs_operational": 2, 00:27:25.490 "base_bdevs_list": [ 00:27:25.490 { 00:27:25.490 "name": null, 00:27:25.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.490 "is_configured": false, 00:27:25.490 "data_offset": 2048, 00:27:25.490 "data_size": 63488 00:27:25.490 }, 00:27:25.490 { 00:27:25.490 "name": "BaseBdev2", 00:27:25.490 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:25.490 "is_configured": true, 00:27:25.490 "data_offset": 2048, 00:27:25.490 "data_size": 63488 00:27:25.490 }, 00:27:25.490 { 00:27:25.490 "name": 
"BaseBdev3", 00:27:25.490 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:25.490 "is_configured": true, 00:27:25.490 "data_offset": 2048, 00:27:25.490 "data_size": 63488 00:27:25.490 } 00:27:25.490 ] 00:27:25.490 }' 00:27:25.490 11:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:25.490 11:38:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:26.056 11:38:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:26.314 [2024-07-25 11:38:42.087379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:26.314 [2024-07-25 11:38:42.087632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.314 [2024-07-25 11:38:42.087800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:26.314 [2024-07-25 11:38:42.087827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.314 [2024-07-25 11:38:42.088426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.314 [2024-07-25 11:38:42.088451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:26.314 [2024-07-25 11:38:42.088581] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:26.314 [2024-07-25 11:38:42.088602] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:26.314 [2024-07-25 11:38:42.088640] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:26.314 [2024-07-25 11:38:42.088672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:26.314 spare 00:27:26.314 [2024-07-25 11:38:42.101758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:27:26.314 [2024-07-25 11:38:42.108950] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:26.314 11:38:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:27.247 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.247 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.247 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:27.247 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:27.247 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.505 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.505 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:27.762 "name": "raid_bdev1", 00:27:27.762 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:27.762 "strip_size_kb": 64, 00:27:27.762 "state": "online", 00:27:27.762 "raid_level": "raid5f", 00:27:27.762 "superblock": true, 00:27:27.762 "num_base_bdevs": 3, 00:27:27.762 "num_base_bdevs_discovered": 3, 00:27:27.762 "num_base_bdevs_operational": 3, 00:27:27.762 "process": { 00:27:27.762 "type": "rebuild", 00:27:27.762 "target": "spare", 00:27:27.762 "progress": { 00:27:27.762 "blocks": 24576, 00:27:27.762 "percent": 19 00:27:27.762 } 00:27:27.762 }, 00:27:27.762 "base_bdevs_list": [ 00:27:27.762 { 00:27:27.762 "name": "spare", 00:27:27.762 "uuid": "4d03faf1-3c4f-5758-8644-d5d13839cd85", 00:27:27.762 "is_configured": true, 00:27:27.762 "data_offset": 2048, 00:27:27.762 "data_size": 63488 00:27:27.762 }, 00:27:27.762 { 00:27:27.762 "name": "BaseBdev2", 00:27:27.762 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:27.762 "is_configured": true, 00:27:27.762 "data_offset": 2048, 00:27:27.762 "data_size": 63488 00:27:27.762 }, 00:27:27.762 { 00:27:27.762 "name": "BaseBdev3", 00:27:27.762 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:27.762 "is_configured": true, 00:27:27.762 "data_offset": 2048, 00:27:27.762 "data_size": 63488 00:27:27.762 } 00:27:27.762 ] 00:27:27.762 }' 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.762 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:28.019 [2024-07-25 11:38:43.723381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.020 
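Note (annotation, not part of the captured trace): with the rebuild at 19%, the spare passthru bdev is deleted out from under it; the entries that follow record the rebuild finishing with 'No such device', the failed attempt to remove the already-gone target, and a check that raid_bdev1 stays online with 2 of its 3 base bdevs operational. A hedged sketch of that degraded-state check, using only fields present in the JSON dumps above; the rpc.py path and socket mirror this run:

    # query the array and keep only raid_bdev1
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")')
    # a raid5f array with a superblock should survive losing one member and stay online
    [[ $(jq -r .state <<< "$info") == online ]]
    [[ $(jq -r .raid_level <<< "$info") == raid5f ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 2 ]]
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 2 ]]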
[2024-07-25 11:38:43.727124] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:28.020 [2024-07-25 11:38:43.727395] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:28.020 [2024-07-25 11:38:43.727648] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.020 [2024-07-25 11:38:43.727819] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.020 11:38:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.277 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.277 "name": "raid_bdev1", 00:27:28.277 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:28.277 "strip_size_kb": 64, 00:27:28.277 "state": "online", 00:27:28.277 "raid_level": "raid5f", 00:27:28.277 "superblock": true, 00:27:28.277 "num_base_bdevs": 3, 00:27:28.277 "num_base_bdevs_discovered": 2, 00:27:28.277 "num_base_bdevs_operational": 2, 00:27:28.277 "base_bdevs_list": [ 00:27:28.277 { 00:27:28.277 "name": null, 00:27:28.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.277 "is_configured": false, 00:27:28.277 "data_offset": 2048, 00:27:28.277 "data_size": 63488 00:27:28.277 }, 00:27:28.277 { 00:27:28.277 "name": "BaseBdev2", 00:27:28.277 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:28.277 "is_configured": true, 00:27:28.277 "data_offset": 2048, 00:27:28.277 "data_size": 63488 00:27:28.277 }, 00:27:28.277 { 00:27:28.277 "name": "BaseBdev3", 00:27:28.277 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:28.278 "is_configured": true, 00:27:28.278 "data_offset": 2048, 00:27:28.278 "data_size": 63488 00:27:28.278 } 00:27:28.278 ] 00:27:28.278 }' 00:27:28.278 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.278 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.843 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.100 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.100 "name": "raid_bdev1", 00:27:29.100 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:29.100 "strip_size_kb": 64, 00:27:29.100 "state": "online", 00:27:29.100 "raid_level": "raid5f", 00:27:29.100 "superblock": true, 00:27:29.100 "num_base_bdevs": 3, 00:27:29.100 "num_base_bdevs_discovered": 2, 00:27:29.100 "num_base_bdevs_operational": 2, 00:27:29.100 "base_bdevs_list": [ 00:27:29.100 { 00:27:29.100 "name": null, 00:27:29.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.100 "is_configured": false, 00:27:29.100 "data_offset": 2048, 00:27:29.100 "data_size": 63488 00:27:29.100 }, 00:27:29.100 { 00:27:29.100 "name": "BaseBdev2", 00:27:29.100 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:29.100 "is_configured": true, 00:27:29.100 "data_offset": 2048, 00:27:29.100 "data_size": 63488 00:27:29.100 }, 00:27:29.100 { 00:27:29.100 "name": "BaseBdev3", 00:27:29.100 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:29.100 "is_configured": true, 00:27:29.100 "data_offset": 2048, 00:27:29.100 "data_size": 63488 00:27:29.100 } 00:27:29.100 ] 00:27:29.100 }' 00:27:29.100 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.358 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:29.358 11:38:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.358 11:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:29.358 11:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:29.617 11:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:29.876 [2024-07-25 11:38:45.549731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:29.876 [2024-07-25 11:38:45.549862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:29.876 [2024-07-25 11:38:45.549896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:29.876 [2024-07-25 11:38:45.549916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:29.876 [2024-07-25 11:38:45.550579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:29.876 [2024-07-25 11:38:45.550607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:29.876 [2024-07-25 11:38:45.550707] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:29.876 [2024-07-25 11:38:45.550750] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:29.876 [2024-07-25 11:38:45.550764] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:29.876 BaseBdev1 00:27:29.876 11:38:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.852 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.109 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:31.109 "name": "raid_bdev1", 00:27:31.109 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:31.109 "strip_size_kb": 64, 00:27:31.109 "state": "online", 00:27:31.109 "raid_level": "raid5f", 00:27:31.109 "superblock": true, 00:27:31.109 "num_base_bdevs": 3, 00:27:31.109 "num_base_bdevs_discovered": 2, 00:27:31.109 "num_base_bdevs_operational": 2, 00:27:31.109 "base_bdevs_list": [ 00:27:31.109 { 00:27:31.109 "name": null, 00:27:31.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.109 "is_configured": false, 00:27:31.109 "data_offset": 2048, 00:27:31.109 "data_size": 63488 00:27:31.109 }, 00:27:31.109 { 00:27:31.109 "name": "BaseBdev2", 00:27:31.109 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:31.109 "is_configured": true, 00:27:31.109 "data_offset": 2048, 00:27:31.109 "data_size": 63488 00:27:31.109 }, 00:27:31.109 { 00:27:31.109 "name": "BaseBdev3", 00:27:31.109 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:31.109 "is_configured": true, 00:27:31.109 "data_offset": 2048, 00:27:31.109 "data_size": 63488 00:27:31.109 } 00:27:31.109 ] 00:27:31.109 }' 00:27:31.109 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:31.109 11:38:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.673 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.931 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.931 "name": "raid_bdev1", 00:27:31.931 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:31.931 "strip_size_kb": 64, 00:27:31.931 "state": "online", 00:27:31.931 "raid_level": "raid5f", 00:27:31.931 "superblock": true, 00:27:31.931 "num_base_bdevs": 3, 00:27:31.931 "num_base_bdevs_discovered": 2, 00:27:31.931 "num_base_bdevs_operational": 2, 00:27:31.931 "base_bdevs_list": [ 00:27:31.931 { 00:27:31.931 "name": null, 00:27:31.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.931 "is_configured": false, 00:27:31.931 "data_offset": 2048, 00:27:31.931 "data_size": 63488 00:27:31.931 }, 00:27:31.931 { 00:27:31.931 "name": "BaseBdev2", 00:27:31.931 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:31.931 "is_configured": true, 00:27:31.931 "data_offset": 2048, 00:27:31.931 "data_size": 63488 00:27:31.931 }, 00:27:31.931 { 00:27:31.931 "name": "BaseBdev3", 00:27:31.931 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:31.931 "is_configured": true, 00:27:31.931 "data_offset": 2048, 00:27:31.931 "data_size": 63488 00:27:31.931 } 00:27:31.931 ] 00:27:31.931 }' 00:27:31.931 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
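Note (annotation, not part of the captured trace): the entries that follow exercise the negative path: BaseBdev1 has been re-registered as a passthru bdev on top of BaseBdev1_malloc, but its on-disk superblock carries a stale sequence number (1 versus the array's 5) and the array's superblock no longer lists its uuid, so bdev_raid_add_base_bdev must be rejected. The harness wraps the call in its NOT helper and expects the JSON-RPC error -22 ('Failed to add base bdev to RAID bdev: Invalid argument') shown below. A standalone sketch of the same expectation, with a plain '!' in place of the NOT helper:

    # expected to fail: the stale BaseBdev1 superblock must not be accepted back into raid_bdev1
    if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "re-add of BaseBdev1 rejected as expected"
    else
        echo "unexpected success re-adding BaseBdev1" >&2
        exit 1
    fi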
00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:32.188 11:38:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:32.447 [2024-07-25 11:38:48.126741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:32.447 [2024-07-25 11:38:48.126949] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:32.447 [2024-07-25 11:38:48.126974] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:32.447 request: 00:27:32.447 { 00:27:32.447 "base_bdev": "BaseBdev1", 00:27:32.447 "raid_bdev": "raid_bdev1", 00:27:32.447 "method": "bdev_raid_add_base_bdev", 00:27:32.447 "req_id": 1 00:27:32.447 } 00:27:32.447 Got JSON-RPC error response 00:27:32.447 response: 00:27:32.447 { 00:27:32.447 "code": -22, 00:27:32.447 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:32.447 } 00:27:32.447 11:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:27:32.447 11:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.447 11:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.447 11:38:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:32.447 11:38:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.379 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.637 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.637 "name": 
"raid_bdev1", 00:27:33.637 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:33.637 "strip_size_kb": 64, 00:27:33.637 "state": "online", 00:27:33.637 "raid_level": "raid5f", 00:27:33.637 "superblock": true, 00:27:33.637 "num_base_bdevs": 3, 00:27:33.637 "num_base_bdevs_discovered": 2, 00:27:33.637 "num_base_bdevs_operational": 2, 00:27:33.637 "base_bdevs_list": [ 00:27:33.637 { 00:27:33.637 "name": null, 00:27:33.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.637 "is_configured": false, 00:27:33.637 "data_offset": 2048, 00:27:33.637 "data_size": 63488 00:27:33.637 }, 00:27:33.637 { 00:27:33.637 "name": "BaseBdev2", 00:27:33.637 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:33.637 "is_configured": true, 00:27:33.637 "data_offset": 2048, 00:27:33.637 "data_size": 63488 00:27:33.637 }, 00:27:33.637 { 00:27:33.637 "name": "BaseBdev3", 00:27:33.637 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:33.637 "is_configured": true, 00:27:33.637 "data_offset": 2048, 00:27:33.637 "data_size": 63488 00:27:33.637 } 00:27:33.637 ] 00:27:33.637 }' 00:27:33.637 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.637 11:38:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.576 "name": "raid_bdev1", 00:27:34.576 "uuid": "27254200-8ec1-4ce8-abe8-360ad310e630", 00:27:34.576 "strip_size_kb": 64, 00:27:34.576 "state": "online", 00:27:34.576 "raid_level": "raid5f", 00:27:34.576 "superblock": true, 00:27:34.576 "num_base_bdevs": 3, 00:27:34.576 "num_base_bdevs_discovered": 2, 00:27:34.576 "num_base_bdevs_operational": 2, 00:27:34.576 "base_bdevs_list": [ 00:27:34.576 { 00:27:34.576 "name": null, 00:27:34.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.576 "is_configured": false, 00:27:34.576 "data_offset": 2048, 00:27:34.576 "data_size": 63488 00:27:34.576 }, 00:27:34.576 { 00:27:34.576 "name": "BaseBdev2", 00:27:34.576 "uuid": "45b14e3f-5a0d-567f-a45a-fbe88defeea4", 00:27:34.576 "is_configured": true, 00:27:34.576 "data_offset": 2048, 00:27:34.576 "data_size": 63488 00:27:34.576 }, 00:27:34.576 { 00:27:34.576 "name": "BaseBdev3", 00:27:34.576 "uuid": "a9e55458-101c-5d83-974d-2d6a7417235c", 00:27:34.576 "is_configured": true, 00:27:34.576 "data_offset": 2048, 00:27:34.576 "data_size": 63488 00:27:34.576 } 00:27:34.576 ] 00:27:34.576 }' 00:27:34.576 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- 
# [[ none == \n\o\n\e ]] 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 94530 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94530 ']' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 94530 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94530 00:27:34.835 killing process with pid 94530 00:27:34.835 Received shutdown signal, test time was about 60.000000 seconds 00:27:34.835 00:27:34.835 Latency(us) 00:27:34.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.835 =================================================================================================================== 00:27:34.835 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94530' 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 94530 00:27:34.835 [2024-07-25 11:38:50.544860] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:34.835 11:38:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 94530 00:27:34.835 [2024-07-25 11:38:50.545017] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:34.835 [2024-07-25 11:38:50.545096] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:34.835 [2024-07-25 11:38:50.545118] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:35.094 [2024-07-25 11:38:50.904612] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:36.470 11:38:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:27:36.470 ************************************ 00:27:36.470 END TEST raid5f_rebuild_test_sb 00:27:36.470 ************************************ 00:27:36.470 00:27:36.470 real 0m37.551s 00:27:36.470 user 0m59.091s 00:27:36.470 sys 0m4.176s 00:27:36.470 11:38:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:36.470 11:38:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.470 11:38:52 bdev_raid -- bdev/bdev_raid.sh@964 -- # for n in {3..4} 00:27:36.470 11:38:52 bdev_raid -- bdev/bdev_raid.sh@965 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:27:36.470 11:38:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:36.470 11:38:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.470 11:38:52 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:27:36.470 ************************************ 00:27:36.470 START TEST raid5f_state_function_test 00:27:36.470 ************************************ 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:27:36.470 11:38:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=95421 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 95421' 00:27:36.470 Process raid pid: 95421 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 95421 /var/tmp/spdk-raid.sock 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 95421 ']' 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:36.470 11:38:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:36.470 [2024-07-25 11:38:52.292771] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:27:36.470 [2024-07-25 11:38:52.293198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.729 [2024-07-25 11:38:52.469989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.987 [2024-07-25 11:38:52.717662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.245 [2024-07-25 11:38:52.928369] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.245 [2024-07-25 11:38:52.928607] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:37.504 11:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:37.504 11:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:27:37.504 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:37.762 [2024-07-25 11:38:53.571984] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:37.762 [2024-07-25 11:38:53.572262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:37.762 [2024-07-25 11:38:53.572294] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:37.762 [2024-07-25 11:38:53.572309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:37.762 [2024-07-25 11:38:53.572323] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:37.762 [2024-07-25 11:38:53.572336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist 
now 00:27:37.762 [2024-07-25 11:38:53.572348] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:37.762 [2024-07-25 11:38:53.572360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.762 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.020 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.020 "name": "Existed_Raid", 00:27:38.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.020 "strip_size_kb": 64, 00:27:38.020 "state": "configuring", 00:27:38.020 "raid_level": "raid5f", 00:27:38.020 "superblock": false, 00:27:38.020 "num_base_bdevs": 4, 00:27:38.020 "num_base_bdevs_discovered": 0, 00:27:38.020 "num_base_bdevs_operational": 4, 00:27:38.020 "base_bdevs_list": [ 00:27:38.020 { 00:27:38.020 "name": "BaseBdev1", 00:27:38.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.020 "is_configured": false, 00:27:38.020 "data_offset": 0, 00:27:38.020 "data_size": 0 00:27:38.020 }, 00:27:38.020 { 00:27:38.020 "name": "BaseBdev2", 00:27:38.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.020 "is_configured": false, 00:27:38.020 "data_offset": 0, 00:27:38.020 "data_size": 0 00:27:38.020 }, 00:27:38.020 { 00:27:38.020 "name": "BaseBdev3", 00:27:38.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.020 "is_configured": false, 00:27:38.020 "data_offset": 0, 00:27:38.020 "data_size": 0 00:27:38.020 }, 00:27:38.020 { 00:27:38.020 "name": "BaseBdev4", 00:27:38.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.020 "is_configured": false, 00:27:38.020 "data_offset": 0, 00:27:38.020 "data_size": 0 00:27:38.020 } 00:27:38.020 ] 00:27:38.020 }' 00:27:38.020 11:38:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.020 11:38:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.954 11:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:38.954 [2024-07-25 11:38:54.724200] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:38.954 [2024-07-25 11:38:54.724251] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:38.954 11:38:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:39.212 [2024-07-25 11:38:54.988305] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:39.212 [2024-07-25 11:38:54.988367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:39.212 [2024-07-25 11:38:54.988402] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:39.212 [2024-07-25 11:38:54.988420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:39.212 [2024-07-25 11:38:54.988440] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:39.212 [2024-07-25 11:38:54.988459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:39.212 [2024-07-25 11:38:54.988475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:39.212 [2024-07-25 11:38:54.988486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:39.212 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:39.470 [2024-07-25 11:38:55.262997] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:39.470 BaseBdev1 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:39.470 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:39.728 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:39.986 [ 00:27:39.986 { 00:27:39.986 "name": "BaseBdev1", 00:27:39.986 "aliases": [ 00:27:39.986 "985b4183-a59a-4739-b7fc-b9675b1c6ebb" 00:27:39.986 ], 00:27:39.986 "product_name": "Malloc disk", 00:27:39.986 "block_size": 512, 00:27:39.986 "num_blocks": 65536, 00:27:39.986 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:39.986 "assigned_rate_limits": { 00:27:39.986 "rw_ios_per_sec": 0, 00:27:39.986 "rw_mbytes_per_sec": 0, 00:27:39.986 "r_mbytes_per_sec": 0, 00:27:39.986 
"w_mbytes_per_sec": 0 00:27:39.986 }, 00:27:39.986 "claimed": true, 00:27:39.986 "claim_type": "exclusive_write", 00:27:39.986 "zoned": false, 00:27:39.986 "supported_io_types": { 00:27:39.986 "read": true, 00:27:39.986 "write": true, 00:27:39.986 "unmap": true, 00:27:39.986 "flush": true, 00:27:39.986 "reset": true, 00:27:39.986 "nvme_admin": false, 00:27:39.986 "nvme_io": false, 00:27:39.986 "nvme_io_md": false, 00:27:39.986 "write_zeroes": true, 00:27:39.986 "zcopy": true, 00:27:39.986 "get_zone_info": false, 00:27:39.986 "zone_management": false, 00:27:39.986 "zone_append": false, 00:27:39.986 "compare": false, 00:27:39.987 "compare_and_write": false, 00:27:39.987 "abort": true, 00:27:39.987 "seek_hole": false, 00:27:39.987 "seek_data": false, 00:27:39.987 "copy": true, 00:27:39.987 "nvme_iov_md": false 00:27:39.987 }, 00:27:39.987 "memory_domains": [ 00:27:39.987 { 00:27:39.987 "dma_device_id": "system", 00:27:39.987 "dma_device_type": 1 00:27:39.987 }, 00:27:39.987 { 00:27:39.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.987 "dma_device_type": 2 00:27:39.987 } 00:27:39.987 ], 00:27:39.987 "driver_specific": {} 00:27:39.987 } 00:27:39.987 ] 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.987 11:38:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.245 11:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.245 "name": "Existed_Raid", 00:27:40.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.245 "strip_size_kb": 64, 00:27:40.245 "state": "configuring", 00:27:40.245 "raid_level": "raid5f", 00:27:40.245 "superblock": false, 00:27:40.245 "num_base_bdevs": 4, 00:27:40.245 "num_base_bdevs_discovered": 1, 00:27:40.245 "num_base_bdevs_operational": 4, 00:27:40.245 "base_bdevs_list": [ 00:27:40.245 { 00:27:40.245 "name": "BaseBdev1", 00:27:40.245 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:40.245 "is_configured": true, 00:27:40.245 "data_offset": 0, 00:27:40.245 "data_size": 65536 00:27:40.245 }, 00:27:40.245 { 00:27:40.245 "name": 
"BaseBdev2", 00:27:40.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.245 "is_configured": false, 00:27:40.245 "data_offset": 0, 00:27:40.245 "data_size": 0 00:27:40.245 }, 00:27:40.245 { 00:27:40.245 "name": "BaseBdev3", 00:27:40.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.245 "is_configured": false, 00:27:40.245 "data_offset": 0, 00:27:40.245 "data_size": 0 00:27:40.245 }, 00:27:40.245 { 00:27:40.245 "name": "BaseBdev4", 00:27:40.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.245 "is_configured": false, 00:27:40.245 "data_offset": 0, 00:27:40.245 "data_size": 0 00:27:40.245 } 00:27:40.245 ] 00:27:40.245 }' 00:27:40.245 11:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.245 11:38:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:40.812 11:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:41.070 [2024-07-25 11:38:56.915551] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:41.070 [2024-07-25 11:38:56.915648] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:41.070 11:38:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:41.327 [2024-07-25 11:38:57.203669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:41.327 [2024-07-25 11:38:57.206031] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:41.327 [2024-07-25 11:38:57.206085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:41.327 [2024-07-25 11:38:57.206106] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:41.327 [2024-07-25 11:38:57.206119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:41.327 [2024-07-25 11:38:57.206135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:41.327 [2024-07-25 11:38:57.206147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.585 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.843 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:41.843 "name": "Existed_Raid", 00:27:41.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.843 "strip_size_kb": 64, 00:27:41.843 "state": "configuring", 00:27:41.843 "raid_level": "raid5f", 00:27:41.843 "superblock": false, 00:27:41.843 "num_base_bdevs": 4, 00:27:41.843 "num_base_bdevs_discovered": 1, 00:27:41.843 "num_base_bdevs_operational": 4, 00:27:41.843 "base_bdevs_list": [ 00:27:41.843 { 00:27:41.843 "name": "BaseBdev1", 00:27:41.843 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:41.843 "is_configured": true, 00:27:41.843 "data_offset": 0, 00:27:41.843 "data_size": 65536 00:27:41.843 }, 00:27:41.843 { 00:27:41.843 "name": "BaseBdev2", 00:27:41.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.843 "is_configured": false, 00:27:41.843 "data_offset": 0, 00:27:41.843 "data_size": 0 00:27:41.843 }, 00:27:41.843 { 00:27:41.843 "name": "BaseBdev3", 00:27:41.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.843 "is_configured": false, 00:27:41.843 "data_offset": 0, 00:27:41.843 "data_size": 0 00:27:41.843 }, 00:27:41.843 { 00:27:41.843 "name": "BaseBdev4", 00:27:41.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:41.843 "is_configured": false, 00:27:41.843 "data_offset": 0, 00:27:41.843 "data_size": 0 00:27:41.843 } 00:27:41.843 ] 00:27:41.843 }' 00:27:41.843 11:38:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:41.843 11:38:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:42.410 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:42.669 [2024-07-25 11:38:58.447534] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:42.669 BaseBdev2 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:42.669 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:42.989 11:38:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:43.248 [ 00:27:43.248 { 00:27:43.248 "name": "BaseBdev2", 00:27:43.248 "aliases": [ 00:27:43.248 "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649" 00:27:43.248 ], 00:27:43.248 "product_name": "Malloc disk", 00:27:43.248 "block_size": 512, 00:27:43.248 "num_blocks": 65536, 00:27:43.248 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:43.248 "assigned_rate_limits": { 00:27:43.248 "rw_ios_per_sec": 0, 00:27:43.248 "rw_mbytes_per_sec": 0, 00:27:43.248 "r_mbytes_per_sec": 0, 00:27:43.248 "w_mbytes_per_sec": 0 00:27:43.248 }, 00:27:43.248 "claimed": true, 00:27:43.248 "claim_type": "exclusive_write", 00:27:43.248 "zoned": false, 00:27:43.248 "supported_io_types": { 00:27:43.248 "read": true, 00:27:43.248 "write": true, 00:27:43.248 "unmap": true, 00:27:43.248 "flush": true, 00:27:43.248 "reset": true, 00:27:43.248 "nvme_admin": false, 00:27:43.248 "nvme_io": false, 00:27:43.248 "nvme_io_md": false, 00:27:43.248 "write_zeroes": true, 00:27:43.248 "zcopy": true, 00:27:43.248 "get_zone_info": false, 00:27:43.248 "zone_management": false, 00:27:43.248 "zone_append": false, 00:27:43.248 "compare": false, 00:27:43.248 "compare_and_write": false, 00:27:43.248 "abort": true, 00:27:43.248 "seek_hole": false, 00:27:43.248 "seek_data": false, 00:27:43.248 "copy": true, 00:27:43.248 "nvme_iov_md": false 00:27:43.248 }, 00:27:43.248 "memory_domains": [ 00:27:43.248 { 00:27:43.248 "dma_device_id": "system", 00:27:43.248 "dma_device_type": 1 00:27:43.248 }, 00:27:43.248 { 00:27:43.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.248 "dma_device_type": 2 00:27:43.248 } 00:27:43.248 ], 00:27:43.248 "driver_specific": {} 00:27:43.248 } 00:27:43.248 ] 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:43.248 11:38:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.248 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.248 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.248 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.248 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.248 11:38:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.506 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.506 "name": "Existed_Raid", 00:27:43.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.506 "strip_size_kb": 64, 00:27:43.506 "state": "configuring", 00:27:43.506 "raid_level": "raid5f", 00:27:43.506 "superblock": false, 00:27:43.506 "num_base_bdevs": 4, 00:27:43.506 "num_base_bdevs_discovered": 2, 00:27:43.506 "num_base_bdevs_operational": 4, 00:27:43.506 "base_bdevs_list": [ 00:27:43.506 { 00:27:43.506 "name": "BaseBdev1", 00:27:43.506 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:43.506 "is_configured": true, 00:27:43.506 "data_offset": 0, 00:27:43.506 "data_size": 65536 00:27:43.506 }, 00:27:43.506 { 00:27:43.506 "name": "BaseBdev2", 00:27:43.506 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:43.506 "is_configured": true, 00:27:43.506 "data_offset": 0, 00:27:43.506 "data_size": 65536 00:27:43.506 }, 00:27:43.506 { 00:27:43.506 "name": "BaseBdev3", 00:27:43.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.506 "is_configured": false, 00:27:43.506 "data_offset": 0, 00:27:43.506 "data_size": 0 00:27:43.506 }, 00:27:43.506 { 00:27:43.506 "name": "BaseBdev4", 00:27:43.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.506 "is_configured": false, 00:27:43.506 "data_offset": 0, 00:27:43.506 "data_size": 0 00:27:43.506 } 00:27:43.506 ] 00:27:43.506 }' 00:27:43.506 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.506 11:38:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.441 11:38:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:44.441 [2024-07-25 11:39:00.289070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:44.441 BaseBdev3 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:44.441 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:45.008 [ 00:27:45.008 { 00:27:45.008 "name": "BaseBdev3", 00:27:45.008 "aliases": [ 00:27:45.008 "ac6c52e2-f6db-48db-9604-6657443598f2" 00:27:45.008 ], 00:27:45.008 "product_name": "Malloc disk", 00:27:45.008 "block_size": 512, 00:27:45.008 "num_blocks": 65536, 00:27:45.008 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:45.008 "assigned_rate_limits": { 00:27:45.008 
"rw_ios_per_sec": 0, 00:27:45.008 "rw_mbytes_per_sec": 0, 00:27:45.008 "r_mbytes_per_sec": 0, 00:27:45.008 "w_mbytes_per_sec": 0 00:27:45.008 }, 00:27:45.008 "claimed": true, 00:27:45.008 "claim_type": "exclusive_write", 00:27:45.008 "zoned": false, 00:27:45.008 "supported_io_types": { 00:27:45.008 "read": true, 00:27:45.008 "write": true, 00:27:45.008 "unmap": true, 00:27:45.008 "flush": true, 00:27:45.008 "reset": true, 00:27:45.008 "nvme_admin": false, 00:27:45.008 "nvme_io": false, 00:27:45.008 "nvme_io_md": false, 00:27:45.008 "write_zeroes": true, 00:27:45.008 "zcopy": true, 00:27:45.008 "get_zone_info": false, 00:27:45.008 "zone_management": false, 00:27:45.008 "zone_append": false, 00:27:45.008 "compare": false, 00:27:45.008 "compare_and_write": false, 00:27:45.008 "abort": true, 00:27:45.008 "seek_hole": false, 00:27:45.008 "seek_data": false, 00:27:45.008 "copy": true, 00:27:45.008 "nvme_iov_md": false 00:27:45.008 }, 00:27:45.008 "memory_domains": [ 00:27:45.008 { 00:27:45.008 "dma_device_id": "system", 00:27:45.008 "dma_device_type": 1 00:27:45.008 }, 00:27:45.008 { 00:27:45.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:45.008 "dma_device_type": 2 00:27:45.008 } 00:27:45.008 ], 00:27:45.008 "driver_specific": {} 00:27:45.008 } 00:27:45.008 ] 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.008 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.266 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.266 11:39:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.524 11:39:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.524 "name": "Existed_Raid", 00:27:45.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.524 "strip_size_kb": 64, 00:27:45.524 "state": "configuring", 00:27:45.524 "raid_level": "raid5f", 00:27:45.524 "superblock": false, 00:27:45.524 "num_base_bdevs": 4, 00:27:45.524 "num_base_bdevs_discovered": 3, 00:27:45.524 
"num_base_bdevs_operational": 4, 00:27:45.524 "base_bdevs_list": [ 00:27:45.524 { 00:27:45.524 "name": "BaseBdev1", 00:27:45.524 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:45.524 "is_configured": true, 00:27:45.524 "data_offset": 0, 00:27:45.524 "data_size": 65536 00:27:45.524 }, 00:27:45.524 { 00:27:45.524 "name": "BaseBdev2", 00:27:45.524 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:45.524 "is_configured": true, 00:27:45.525 "data_offset": 0, 00:27:45.525 "data_size": 65536 00:27:45.525 }, 00:27:45.525 { 00:27:45.525 "name": "BaseBdev3", 00:27:45.525 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:45.525 "is_configured": true, 00:27:45.525 "data_offset": 0, 00:27:45.525 "data_size": 65536 00:27:45.525 }, 00:27:45.525 { 00:27:45.525 "name": "BaseBdev4", 00:27:45.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.525 "is_configured": false, 00:27:45.525 "data_offset": 0, 00:27:45.525 "data_size": 0 00:27:45.525 } 00:27:45.525 ] 00:27:45.525 }' 00:27:45.525 11:39:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.525 11:39:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.138 11:39:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:46.398 [2024-07-25 11:39:02.125498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:46.398 [2024-07-25 11:39:02.125583] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:46.398 [2024-07-25 11:39:02.125599] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:46.398 [2024-07-25 11:39:02.125974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:46.398 [2024-07-25 11:39:02.132672] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:46.398 [2024-07-25 11:39:02.132696] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:46.398 [2024-07-25 11:39:02.133025] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.398 BaseBdev4 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:46.398 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:46.656 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:46.914 [ 00:27:46.914 { 00:27:46.914 "name": "BaseBdev4", 00:27:46.914 "aliases": [ 00:27:46.914 "8c55c605-93ba-4e15-8f89-e64c064a231b" 
00:27:46.914 ], 00:27:46.914 "product_name": "Malloc disk", 00:27:46.914 "block_size": 512, 00:27:46.914 "num_blocks": 65536, 00:27:46.914 "uuid": "8c55c605-93ba-4e15-8f89-e64c064a231b", 00:27:46.914 "assigned_rate_limits": { 00:27:46.914 "rw_ios_per_sec": 0, 00:27:46.914 "rw_mbytes_per_sec": 0, 00:27:46.914 "r_mbytes_per_sec": 0, 00:27:46.914 "w_mbytes_per_sec": 0 00:27:46.914 }, 00:27:46.914 "claimed": true, 00:27:46.914 "claim_type": "exclusive_write", 00:27:46.914 "zoned": false, 00:27:46.914 "supported_io_types": { 00:27:46.914 "read": true, 00:27:46.914 "write": true, 00:27:46.914 "unmap": true, 00:27:46.914 "flush": true, 00:27:46.914 "reset": true, 00:27:46.914 "nvme_admin": false, 00:27:46.914 "nvme_io": false, 00:27:46.914 "nvme_io_md": false, 00:27:46.914 "write_zeroes": true, 00:27:46.914 "zcopy": true, 00:27:46.914 "get_zone_info": false, 00:27:46.914 "zone_management": false, 00:27:46.914 "zone_append": false, 00:27:46.914 "compare": false, 00:27:46.914 "compare_and_write": false, 00:27:46.914 "abort": true, 00:27:46.914 "seek_hole": false, 00:27:46.914 "seek_data": false, 00:27:46.914 "copy": true, 00:27:46.914 "nvme_iov_md": false 00:27:46.914 }, 00:27:46.914 "memory_domains": [ 00:27:46.914 { 00:27:46.914 "dma_device_id": "system", 00:27:46.914 "dma_device_type": 1 00:27:46.914 }, 00:27:46.914 { 00:27:46.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.914 "dma_device_type": 2 00:27:46.914 } 00:27:46.914 ], 00:27:46.914 "driver_specific": {} 00:27:46.914 } 00:27:46.914 ] 00:27:46.914 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.915 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:47.173 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.173 "name": "Existed_Raid", 00:27:47.173 "uuid": "30e9a926-8fe3-4a24-92a9-1bd1b25c8b74", 00:27:47.173 
"strip_size_kb": 64, 00:27:47.173 "state": "online", 00:27:47.173 "raid_level": "raid5f", 00:27:47.173 "superblock": false, 00:27:47.173 "num_base_bdevs": 4, 00:27:47.173 "num_base_bdevs_discovered": 4, 00:27:47.173 "num_base_bdevs_operational": 4, 00:27:47.173 "base_bdevs_list": [ 00:27:47.173 { 00:27:47.173 "name": "BaseBdev1", 00:27:47.173 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:47.173 "is_configured": true, 00:27:47.173 "data_offset": 0, 00:27:47.173 "data_size": 65536 00:27:47.173 }, 00:27:47.173 { 00:27:47.173 "name": "BaseBdev2", 00:27:47.173 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:47.173 "is_configured": true, 00:27:47.173 "data_offset": 0, 00:27:47.173 "data_size": 65536 00:27:47.173 }, 00:27:47.173 { 00:27:47.173 "name": "BaseBdev3", 00:27:47.173 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:47.173 "is_configured": true, 00:27:47.173 "data_offset": 0, 00:27:47.173 "data_size": 65536 00:27:47.173 }, 00:27:47.173 { 00:27:47.173 "name": "BaseBdev4", 00:27:47.173 "uuid": "8c55c605-93ba-4e15-8f89-e64c064a231b", 00:27:47.173 "is_configured": true, 00:27:47.173 "data_offset": 0, 00:27:47.173 "data_size": 65536 00:27:47.173 } 00:27:47.173 ] 00:27:47.173 }' 00:27:47.173 11:39:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.173 11:39:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:47.739 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:47.997 [2024-07-25 11:39:03.869082] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:48.256 "name": "Existed_Raid", 00:27:48.256 "aliases": [ 00:27:48.256 "30e9a926-8fe3-4a24-92a9-1bd1b25c8b74" 00:27:48.256 ], 00:27:48.256 "product_name": "Raid Volume", 00:27:48.256 "block_size": 512, 00:27:48.256 "num_blocks": 196608, 00:27:48.256 "uuid": "30e9a926-8fe3-4a24-92a9-1bd1b25c8b74", 00:27:48.256 "assigned_rate_limits": { 00:27:48.256 "rw_ios_per_sec": 0, 00:27:48.256 "rw_mbytes_per_sec": 0, 00:27:48.256 "r_mbytes_per_sec": 0, 00:27:48.256 "w_mbytes_per_sec": 0 00:27:48.256 }, 00:27:48.256 "claimed": false, 00:27:48.256 "zoned": false, 00:27:48.256 "supported_io_types": { 00:27:48.256 "read": true, 00:27:48.256 "write": true, 00:27:48.256 "unmap": false, 00:27:48.256 "flush": false, 00:27:48.256 "reset": true, 00:27:48.256 "nvme_admin": false, 00:27:48.256 "nvme_io": false, 00:27:48.256 "nvme_io_md": false, 00:27:48.256 "write_zeroes": true, 00:27:48.256 "zcopy": false, 00:27:48.256 "get_zone_info": 
false, 00:27:48.256 "zone_management": false, 00:27:48.256 "zone_append": false, 00:27:48.256 "compare": false, 00:27:48.256 "compare_and_write": false, 00:27:48.256 "abort": false, 00:27:48.256 "seek_hole": false, 00:27:48.256 "seek_data": false, 00:27:48.256 "copy": false, 00:27:48.256 "nvme_iov_md": false 00:27:48.256 }, 00:27:48.256 "driver_specific": { 00:27:48.256 "raid": { 00:27:48.256 "uuid": "30e9a926-8fe3-4a24-92a9-1bd1b25c8b74", 00:27:48.256 "strip_size_kb": 64, 00:27:48.256 "state": "online", 00:27:48.256 "raid_level": "raid5f", 00:27:48.256 "superblock": false, 00:27:48.256 "num_base_bdevs": 4, 00:27:48.256 "num_base_bdevs_discovered": 4, 00:27:48.256 "num_base_bdevs_operational": 4, 00:27:48.256 "base_bdevs_list": [ 00:27:48.256 { 00:27:48.256 "name": "BaseBdev1", 00:27:48.256 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:48.256 "is_configured": true, 00:27:48.256 "data_offset": 0, 00:27:48.256 "data_size": 65536 00:27:48.256 }, 00:27:48.256 { 00:27:48.256 "name": "BaseBdev2", 00:27:48.256 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:48.256 "is_configured": true, 00:27:48.256 "data_offset": 0, 00:27:48.256 "data_size": 65536 00:27:48.256 }, 00:27:48.256 { 00:27:48.256 "name": "BaseBdev3", 00:27:48.256 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:48.256 "is_configured": true, 00:27:48.256 "data_offset": 0, 00:27:48.256 "data_size": 65536 00:27:48.256 }, 00:27:48.256 { 00:27:48.256 "name": "BaseBdev4", 00:27:48.256 "uuid": "8c55c605-93ba-4e15-8f89-e64c064a231b", 00:27:48.256 "is_configured": true, 00:27:48.256 "data_offset": 0, 00:27:48.256 "data_size": 65536 00:27:48.256 } 00:27:48.256 ] 00:27:48.256 } 00:27:48.256 } 00:27:48.256 }' 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:48.256 BaseBdev2 00:27:48.256 BaseBdev3 00:27:48.256 BaseBdev4' 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:48.256 11:39:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:48.515 "name": "BaseBdev1", 00:27:48.515 "aliases": [ 00:27:48.515 "985b4183-a59a-4739-b7fc-b9675b1c6ebb" 00:27:48.515 ], 00:27:48.515 "product_name": "Malloc disk", 00:27:48.515 "block_size": 512, 00:27:48.515 "num_blocks": 65536, 00:27:48.515 "uuid": "985b4183-a59a-4739-b7fc-b9675b1c6ebb", 00:27:48.515 "assigned_rate_limits": { 00:27:48.515 "rw_ios_per_sec": 0, 00:27:48.515 "rw_mbytes_per_sec": 0, 00:27:48.515 "r_mbytes_per_sec": 0, 00:27:48.515 "w_mbytes_per_sec": 0 00:27:48.515 }, 00:27:48.515 "claimed": true, 00:27:48.515 "claim_type": "exclusive_write", 00:27:48.515 "zoned": false, 00:27:48.515 "supported_io_types": { 00:27:48.515 "read": true, 00:27:48.515 "write": true, 00:27:48.515 "unmap": true, 00:27:48.515 "flush": true, 00:27:48.515 "reset": true, 00:27:48.515 "nvme_admin": false, 00:27:48.515 "nvme_io": false, 00:27:48.515 "nvme_io_md": false, 00:27:48.515 "write_zeroes": true, 00:27:48.515 "zcopy": true, 00:27:48.515 "get_zone_info": false, 
00:27:48.515 "zone_management": false, 00:27:48.515 "zone_append": false, 00:27:48.515 "compare": false, 00:27:48.515 "compare_and_write": false, 00:27:48.515 "abort": true, 00:27:48.515 "seek_hole": false, 00:27:48.515 "seek_data": false, 00:27:48.515 "copy": true, 00:27:48.515 "nvme_iov_md": false 00:27:48.515 }, 00:27:48.515 "memory_domains": [ 00:27:48.515 { 00:27:48.515 "dma_device_id": "system", 00:27:48.515 "dma_device_type": 1 00:27:48.515 }, 00:27:48.515 { 00:27:48.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:48.515 "dma_device_type": 2 00:27:48.515 } 00:27:48.515 ], 00:27:48.515 "driver_specific": {} 00:27:48.515 }' 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:48.515 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:48.774 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:49.063 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:49.063 "name": "BaseBdev2", 00:27:49.063 "aliases": [ 00:27:49.063 "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649" 00:27:49.063 ], 00:27:49.063 "product_name": "Malloc disk", 00:27:49.063 "block_size": 512, 00:27:49.063 "num_blocks": 65536, 00:27:49.063 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:49.063 "assigned_rate_limits": { 00:27:49.063 "rw_ios_per_sec": 0, 00:27:49.063 "rw_mbytes_per_sec": 0, 00:27:49.063 "r_mbytes_per_sec": 0, 00:27:49.063 "w_mbytes_per_sec": 0 00:27:49.063 }, 00:27:49.063 "claimed": true, 00:27:49.063 "claim_type": "exclusive_write", 00:27:49.063 "zoned": false, 00:27:49.063 "supported_io_types": { 00:27:49.063 "read": true, 00:27:49.063 "write": true, 00:27:49.063 "unmap": true, 00:27:49.063 "flush": true, 00:27:49.063 "reset": true, 00:27:49.063 "nvme_admin": false, 00:27:49.063 "nvme_io": false, 00:27:49.063 "nvme_io_md": false, 00:27:49.063 "write_zeroes": true, 00:27:49.063 "zcopy": true, 00:27:49.063 "get_zone_info": false, 00:27:49.063 "zone_management": false, 00:27:49.063 "zone_append": false, 00:27:49.063 "compare": false, 00:27:49.063 "compare_and_write": false, 00:27:49.063 "abort": 
true, 00:27:49.063 "seek_hole": false, 00:27:49.063 "seek_data": false, 00:27:49.063 "copy": true, 00:27:49.063 "nvme_iov_md": false 00:27:49.063 }, 00:27:49.063 "memory_domains": [ 00:27:49.063 { 00:27:49.063 "dma_device_id": "system", 00:27:49.063 "dma_device_type": 1 00:27:49.063 }, 00:27:49.063 { 00:27:49.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.063 "dma_device_type": 2 00:27:49.063 } 00:27:49.063 ], 00:27:49.063 "driver_specific": {} 00:27:49.063 }' 00:27:49.063 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.063 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.330 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:49.330 11:39:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:49.330 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.588 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:49.588 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:49.588 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:49.588 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:27:49.588 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:49.846 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:49.846 "name": "BaseBdev3", 00:27:49.846 "aliases": [ 00:27:49.846 "ac6c52e2-f6db-48db-9604-6657443598f2" 00:27:49.846 ], 00:27:49.846 "product_name": "Malloc disk", 00:27:49.846 "block_size": 512, 00:27:49.846 "num_blocks": 65536, 00:27:49.846 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:49.846 "assigned_rate_limits": { 00:27:49.846 "rw_ios_per_sec": 0, 00:27:49.846 "rw_mbytes_per_sec": 0, 00:27:49.846 "r_mbytes_per_sec": 0, 00:27:49.846 "w_mbytes_per_sec": 0 00:27:49.846 }, 00:27:49.846 "claimed": true, 00:27:49.846 "claim_type": "exclusive_write", 00:27:49.846 "zoned": false, 00:27:49.846 "supported_io_types": { 00:27:49.846 "read": true, 00:27:49.846 "write": true, 00:27:49.846 "unmap": true, 00:27:49.846 "flush": true, 00:27:49.846 "reset": true, 00:27:49.846 "nvme_admin": false, 00:27:49.846 "nvme_io": false, 00:27:49.846 "nvme_io_md": false, 00:27:49.846 "write_zeroes": true, 00:27:49.846 "zcopy": true, 00:27:49.846 "get_zone_info": false, 00:27:49.846 "zone_management": false, 00:27:49.846 "zone_append": false, 00:27:49.846 "compare": false, 00:27:49.846 "compare_and_write": false, 00:27:49.846 "abort": true, 00:27:49.846 "seek_hole": false, 00:27:49.846 "seek_data": false, 00:27:49.846 "copy": true, 00:27:49.846 "nvme_iov_md": false 00:27:49.846 }, 00:27:49.846 
"memory_domains": [ 00:27:49.846 { 00:27:49.846 "dma_device_id": "system", 00:27:49.846 "dma_device_type": 1 00:27:49.846 }, 00:27:49.846 { 00:27:49.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.846 "dma_device_type": 2 00:27:49.846 } 00:27:49.846 ], 00:27:49.846 "driver_specific": {} 00:27:49.846 }' 00:27:49.846 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.846 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:49.846 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:49.846 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:27:50.104 11:39:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:50.362 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:50.362 "name": "BaseBdev4", 00:27:50.362 "aliases": [ 00:27:50.362 "8c55c605-93ba-4e15-8f89-e64c064a231b" 00:27:50.362 ], 00:27:50.362 "product_name": "Malloc disk", 00:27:50.362 "block_size": 512, 00:27:50.362 "num_blocks": 65536, 00:27:50.362 "uuid": "8c55c605-93ba-4e15-8f89-e64c064a231b", 00:27:50.362 "assigned_rate_limits": { 00:27:50.362 "rw_ios_per_sec": 0, 00:27:50.362 "rw_mbytes_per_sec": 0, 00:27:50.362 "r_mbytes_per_sec": 0, 00:27:50.362 "w_mbytes_per_sec": 0 00:27:50.362 }, 00:27:50.362 "claimed": true, 00:27:50.362 "claim_type": "exclusive_write", 00:27:50.362 "zoned": false, 00:27:50.362 "supported_io_types": { 00:27:50.362 "read": true, 00:27:50.362 "write": true, 00:27:50.362 "unmap": true, 00:27:50.362 "flush": true, 00:27:50.362 "reset": true, 00:27:50.362 "nvme_admin": false, 00:27:50.362 "nvme_io": false, 00:27:50.362 "nvme_io_md": false, 00:27:50.362 "write_zeroes": true, 00:27:50.362 "zcopy": true, 00:27:50.362 "get_zone_info": false, 00:27:50.362 "zone_management": false, 00:27:50.362 "zone_append": false, 00:27:50.362 "compare": false, 00:27:50.362 "compare_and_write": false, 00:27:50.362 "abort": true, 00:27:50.362 "seek_hole": false, 00:27:50.362 "seek_data": false, 00:27:50.362 "copy": true, 00:27:50.362 "nvme_iov_md": false 00:27:50.362 }, 00:27:50.362 "memory_domains": [ 00:27:50.362 { 00:27:50.362 "dma_device_id": "system", 00:27:50.362 "dma_device_type": 1 00:27:50.362 }, 00:27:50.362 { 00:27:50.362 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:50.362 "dma_device_type": 2 00:27:50.362 } 00:27:50.362 ], 00:27:50.362 "driver_specific": {} 00:27:50.362 }' 00:27:50.362 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.620 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:50.620 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:27:50.620 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.620 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:50.620 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:27:50.621 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.621 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:50.621 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:27:50.621 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.878 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:50.878 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:27:50.878 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:51.136 [2024-07-25 11:39:06.870005] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.136 11:39:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.700 11:39:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.700 "name": "Existed_Raid", 00:27:51.700 "uuid": "30e9a926-8fe3-4a24-92a9-1bd1b25c8b74", 00:27:51.700 "strip_size_kb": 64, 00:27:51.700 "state": "online", 00:27:51.700 "raid_level": "raid5f", 00:27:51.700 "superblock": false, 00:27:51.700 "num_base_bdevs": 4, 00:27:51.700 "num_base_bdevs_discovered": 3, 00:27:51.700 "num_base_bdevs_operational": 3, 00:27:51.700 "base_bdevs_list": [ 00:27:51.700 { 00:27:51.700 "name": null, 00:27:51.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.701 "is_configured": false, 00:27:51.701 "data_offset": 0, 00:27:51.701 "data_size": 65536 00:27:51.701 }, 00:27:51.701 { 00:27:51.701 "name": "BaseBdev2", 00:27:51.701 "uuid": "f65bcd54-a314-4d3d-8dc8-e53e7e9aa649", 00:27:51.701 "is_configured": true, 00:27:51.701 "data_offset": 0, 00:27:51.701 "data_size": 65536 00:27:51.701 }, 00:27:51.701 { 00:27:51.701 "name": "BaseBdev3", 00:27:51.701 "uuid": "ac6c52e2-f6db-48db-9604-6657443598f2", 00:27:51.701 "is_configured": true, 00:27:51.701 "data_offset": 0, 00:27:51.701 "data_size": 65536 00:27:51.701 }, 00:27:51.701 { 00:27:51.701 "name": "BaseBdev4", 00:27:51.701 "uuid": "8c55c605-93ba-4e15-8f89-e64c064a231b", 00:27:51.701 "is_configured": true, 00:27:51.701 "data_offset": 0, 00:27:51.701 "data_size": 65536 00:27:51.701 } 00:27:51.701 ] 00:27:51.701 }' 00:27:51.701 11:39:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.701 11:39:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.267 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:52.267 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:52.267 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.267 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:52.525 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:52.525 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:52.525 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:52.783 [2024-07-25 11:39:08.633655] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:52.783 [2024-07-25 11:39:08.633804] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:53.056 [2024-07-25 11:39:08.721973] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:53.057 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:53.057 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:53.057 11:39:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.057 11:39:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:53.316 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:53.316 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.316 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:53.574 [2024-07-25 11:39:09.270220] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:53.574 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:53.574 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:53.574 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.574 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:53.832 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:53.832 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:53.832 11:39:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:54.398 [2024-07-25 11:39:09.992936] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:54.398 [2024-07-25 11:39:09.993051] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:54.398 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:54.398 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:54.398 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:54.398 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:54.655 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:54.913 BaseBdev2 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:54.913 
11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:54.913 11:39:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:55.477 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:55.735 [ 00:27:55.735 { 00:27:55.735 "name": "BaseBdev2", 00:27:55.735 "aliases": [ 00:27:55.735 "207c4cba-5365-49a7-97e5-a9ee519f87d1" 00:27:55.735 ], 00:27:55.735 "product_name": "Malloc disk", 00:27:55.735 "block_size": 512, 00:27:55.735 "num_blocks": 65536, 00:27:55.735 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:27:55.735 "assigned_rate_limits": { 00:27:55.735 "rw_ios_per_sec": 0, 00:27:55.735 "rw_mbytes_per_sec": 0, 00:27:55.735 "r_mbytes_per_sec": 0, 00:27:55.735 "w_mbytes_per_sec": 0 00:27:55.735 }, 00:27:55.735 "claimed": false, 00:27:55.735 "zoned": false, 00:27:55.735 "supported_io_types": { 00:27:55.735 "read": true, 00:27:55.735 "write": true, 00:27:55.735 "unmap": true, 00:27:55.735 "flush": true, 00:27:55.735 "reset": true, 00:27:55.735 "nvme_admin": false, 00:27:55.735 "nvme_io": false, 00:27:55.735 "nvme_io_md": false, 00:27:55.735 "write_zeroes": true, 00:27:55.735 "zcopy": true, 00:27:55.735 "get_zone_info": false, 00:27:55.735 "zone_management": false, 00:27:55.735 "zone_append": false, 00:27:55.735 "compare": false, 00:27:55.735 "compare_and_write": false, 00:27:55.735 "abort": true, 00:27:55.735 "seek_hole": false, 00:27:55.735 "seek_data": false, 00:27:55.735 "copy": true, 00:27:55.735 "nvme_iov_md": false 00:27:55.735 }, 00:27:55.735 "memory_domains": [ 00:27:55.735 { 00:27:55.735 "dma_device_id": "system", 00:27:55.735 "dma_device_type": 1 00:27:55.735 }, 00:27:55.735 { 00:27:55.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:55.735 "dma_device_type": 2 00:27:55.735 } 00:27:55.735 ], 00:27:55.735 "driver_specific": {} 00:27:55.735 } 00:27:55.735 ] 00:27:55.735 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:55.735 11:39:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:55.735 11:39:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:55.735 11:39:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:55.993 BaseBdev3 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:56.250 11:39:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:56.508 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:56.767 [ 00:27:56.767 { 00:27:56.767 "name": "BaseBdev3", 00:27:56.767 "aliases": [ 00:27:56.767 "91d054f0-08f5-431f-b273-ded3d39d8283" 00:27:56.767 ], 00:27:56.767 "product_name": "Malloc disk", 00:27:56.767 "block_size": 512, 00:27:56.767 "num_blocks": 65536, 00:27:56.767 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:27:56.767 "assigned_rate_limits": { 00:27:56.767 "rw_ios_per_sec": 0, 00:27:56.767 "rw_mbytes_per_sec": 0, 00:27:56.767 "r_mbytes_per_sec": 0, 00:27:56.767 "w_mbytes_per_sec": 0 00:27:56.767 }, 00:27:56.767 "claimed": false, 00:27:56.767 "zoned": false, 00:27:56.767 "supported_io_types": { 00:27:56.767 "read": true, 00:27:56.767 "write": true, 00:27:56.767 "unmap": true, 00:27:56.767 "flush": true, 00:27:56.767 "reset": true, 00:27:56.767 "nvme_admin": false, 00:27:56.767 "nvme_io": false, 00:27:56.767 "nvme_io_md": false, 00:27:56.767 "write_zeroes": true, 00:27:56.767 "zcopy": true, 00:27:56.767 "get_zone_info": false, 00:27:56.767 "zone_management": false, 00:27:56.767 "zone_append": false, 00:27:56.767 "compare": false, 00:27:56.767 "compare_and_write": false, 00:27:56.767 "abort": true, 00:27:56.767 "seek_hole": false, 00:27:56.767 "seek_data": false, 00:27:56.767 "copy": true, 00:27:56.767 "nvme_iov_md": false 00:27:56.767 }, 00:27:56.767 "memory_domains": [ 00:27:56.767 { 00:27:56.767 "dma_device_id": "system", 00:27:56.767 "dma_device_type": 1 00:27:56.767 }, 00:27:56.767 { 00:27:56.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:56.767 "dma_device_type": 2 00:27:56.767 } 00:27:56.767 ], 00:27:56.767 "driver_specific": {} 00:27:56.767 } 00:27:56.767 ] 00:27:56.767 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:56.767 11:39:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:56.767 11:39:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:56.767 11:39:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:57.026 BaseBdev4 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:57.026 11:39:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:57.284 11:39:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:57.542 [ 
00:27:57.542 { 00:27:57.542 "name": "BaseBdev4", 00:27:57.542 "aliases": [ 00:27:57.542 "33a2b638-807c-4db5-bffe-31dbc7341edb" 00:27:57.542 ], 00:27:57.542 "product_name": "Malloc disk", 00:27:57.542 "block_size": 512, 00:27:57.542 "num_blocks": 65536, 00:27:57.542 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:27:57.542 "assigned_rate_limits": { 00:27:57.542 "rw_ios_per_sec": 0, 00:27:57.542 "rw_mbytes_per_sec": 0, 00:27:57.542 "r_mbytes_per_sec": 0, 00:27:57.542 "w_mbytes_per_sec": 0 00:27:57.542 }, 00:27:57.542 "claimed": false, 00:27:57.542 "zoned": false, 00:27:57.542 "supported_io_types": { 00:27:57.542 "read": true, 00:27:57.542 "write": true, 00:27:57.542 "unmap": true, 00:27:57.542 "flush": true, 00:27:57.542 "reset": true, 00:27:57.542 "nvme_admin": false, 00:27:57.542 "nvme_io": false, 00:27:57.542 "nvme_io_md": false, 00:27:57.542 "write_zeroes": true, 00:27:57.542 "zcopy": true, 00:27:57.542 "get_zone_info": false, 00:27:57.542 "zone_management": false, 00:27:57.542 "zone_append": false, 00:27:57.542 "compare": false, 00:27:57.542 "compare_and_write": false, 00:27:57.542 "abort": true, 00:27:57.542 "seek_hole": false, 00:27:57.542 "seek_data": false, 00:27:57.542 "copy": true, 00:27:57.542 "nvme_iov_md": false 00:27:57.542 }, 00:27:57.542 "memory_domains": [ 00:27:57.542 { 00:27:57.542 "dma_device_id": "system", 00:27:57.542 "dma_device_type": 1 00:27:57.542 }, 00:27:57.542 { 00:27:57.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.542 "dma_device_type": 2 00:27:57.542 } 00:27:57.542 ], 00:27:57.542 "driver_specific": {} 00:27:57.542 } 00:27:57.542 ] 00:27:57.542 11:39:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:57.542 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:27:57.542 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:27:57.542 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:58.108 [2024-07-25 11:39:13.688165] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:58.108 [2024-07-25 11:39:13.688242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:58.108 [2024-07-25 11:39:13.688278] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:58.108 [2024-07-25 11:39:13.690631] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:58.108 [2024-07-25 11:39:13.690708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
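The create call traced just above is the interesting one: BaseBdev1 has not been recreated yet, so bdev_raid_create can claim only three of the four named members and the new Existed_Raid has to report "configuring" rather than going online. A minimal out-of-harness sketch of that check, assuming the same RPC socket and bdev names as the trace (the inline jq assertion is illustrative and stands in for the harness's own verify_raid_bdev_state helper):

# Hedged sketch: create a raid5f array with one member still missing and
# confirm it reports "configuring" with 3 of 4 base bdevs discovered.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# BaseBdev2..BaseBdev4 already exist as 32 MiB / 512 B malloc disks; BaseBdev1 does not.
$rpc bdev_raid_create -z 64 -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

$rpc bdev_raid_get_bdevs all \
  | jq -e '.[] | select(.name == "Existed_Raid")
           | .state == "configuring" and .num_base_bdevs_discovered == 3'

The "Currently unable to find bdev with name: BaseBdev1" NOTICE in the log is the expected side effect of that create: the missing member is recorded in the superblock-less config and picked up later once a bdev with that name appears.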
00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.108 11:39:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:58.366 11:39:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.366 "name": "Existed_Raid", 00:27:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.366 "strip_size_kb": 64, 00:27:58.366 "state": "configuring", 00:27:58.366 "raid_level": "raid5f", 00:27:58.366 "superblock": false, 00:27:58.366 "num_base_bdevs": 4, 00:27:58.366 "num_base_bdevs_discovered": 3, 00:27:58.366 "num_base_bdevs_operational": 4, 00:27:58.366 "base_bdevs_list": [ 00:27:58.366 { 00:27:58.366 "name": "BaseBdev1", 00:27:58.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.366 "is_configured": false, 00:27:58.366 "data_offset": 0, 00:27:58.366 "data_size": 0 00:27:58.366 }, 00:27:58.366 { 00:27:58.366 "name": "BaseBdev2", 00:27:58.366 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:27:58.366 "is_configured": true, 00:27:58.366 "data_offset": 0, 00:27:58.366 "data_size": 65536 00:27:58.366 }, 00:27:58.366 { 00:27:58.366 "name": "BaseBdev3", 00:27:58.366 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:27:58.366 "is_configured": true, 00:27:58.366 "data_offset": 0, 00:27:58.366 "data_size": 65536 00:27:58.366 }, 00:27:58.366 { 00:27:58.366 "name": "BaseBdev4", 00:27:58.366 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:27:58.366 "is_configured": true, 00:27:58.366 "data_offset": 0, 00:27:58.366 "data_size": 65536 00:27:58.366 } 00:27:58.366 ] 00:27:58.366 }' 00:27:58.366 11:39:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.366 11:39:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.931 11:39:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:27:59.497 [2024-07-25 11:39:15.136548] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
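Hot-removal is exercised the same way in the entries above: bdev_raid_remove_base_bdev pulls BaseBdev2 back out of the still-unassembled array, and the expectation is that only num_base_bdevs_discovered drops while the state stays "configuring". A hedged condensed version of that verification (same socket and names as the trace; the jq check is a stand-in for the script's state comparison):

# After removing one member from the configuring array, 2 of 4 expected members remain discovered.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_remove_base_bdev BaseBdev2
$rpc bdev_raid_get_bdevs all \
  | jq -e '.[] | select(.name == "Existed_Raid")
           | .state == "configuring"
             and .num_base_bdevs_operational == 4
             and .num_base_bdevs_discovered == 2'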
00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.497 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.779 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.779 "name": "Existed_Raid", 00:27:59.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.779 "strip_size_kb": 64, 00:27:59.779 "state": "configuring", 00:27:59.779 "raid_level": "raid5f", 00:27:59.779 "superblock": false, 00:27:59.779 "num_base_bdevs": 4, 00:27:59.779 "num_base_bdevs_discovered": 2, 00:27:59.779 "num_base_bdevs_operational": 4, 00:27:59.779 "base_bdevs_list": [ 00:27:59.779 { 00:27:59.779 "name": "BaseBdev1", 00:27:59.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.779 "is_configured": false, 00:27:59.779 "data_offset": 0, 00:27:59.779 "data_size": 0 00:27:59.779 }, 00:27:59.779 { 00:27:59.779 "name": null, 00:27:59.779 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:27:59.779 "is_configured": false, 00:27:59.779 "data_offset": 0, 00:27:59.779 "data_size": 65536 00:27:59.779 }, 00:27:59.779 { 00:27:59.779 "name": "BaseBdev3", 00:27:59.779 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:27:59.779 "is_configured": true, 00:27:59.779 "data_offset": 0, 00:27:59.779 "data_size": 65536 00:27:59.779 }, 00:27:59.779 { 00:27:59.779 "name": "BaseBdev4", 00:27:59.779 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:27:59.779 "is_configured": true, 00:27:59.779 "data_offset": 0, 00:27:59.779 "data_size": 65536 00:27:59.779 } 00:27:59.779 ] 00:27:59.779 }' 00:27:59.779 11:39:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.779 11:39:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.344 11:39:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.344 11:39:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:00.601 11:39:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:00.601 11:39:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:01.188 [2024-07-25 11:39:16.756248] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:01.188 BaseBdev1 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:01.188 11:39:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:01.188 11:39:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:01.188 11:39:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:01.481 [ 00:28:01.481 { 00:28:01.481 "name": "BaseBdev1", 00:28:01.481 "aliases": [ 00:28:01.481 "5b315a59-852f-4a47-95cc-fa9dee699382" 00:28:01.481 ], 00:28:01.481 "product_name": "Malloc disk", 00:28:01.481 "block_size": 512, 00:28:01.481 "num_blocks": 65536, 00:28:01.481 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:01.481 "assigned_rate_limits": { 00:28:01.481 "rw_ios_per_sec": 0, 00:28:01.481 "rw_mbytes_per_sec": 0, 00:28:01.481 "r_mbytes_per_sec": 0, 00:28:01.481 "w_mbytes_per_sec": 0 00:28:01.481 }, 00:28:01.481 "claimed": true, 00:28:01.481 "claim_type": "exclusive_write", 00:28:01.481 "zoned": false, 00:28:01.481 "supported_io_types": { 00:28:01.481 "read": true, 00:28:01.481 "write": true, 00:28:01.481 "unmap": true, 00:28:01.481 "flush": true, 00:28:01.481 "reset": true, 00:28:01.481 "nvme_admin": false, 00:28:01.481 "nvme_io": false, 00:28:01.481 "nvme_io_md": false, 00:28:01.481 "write_zeroes": true, 00:28:01.481 "zcopy": true, 00:28:01.481 "get_zone_info": false, 00:28:01.481 "zone_management": false, 00:28:01.481 "zone_append": false, 00:28:01.481 "compare": false, 00:28:01.481 "compare_and_write": false, 00:28:01.481 "abort": true, 00:28:01.481 "seek_hole": false, 00:28:01.481 "seek_data": false, 00:28:01.481 "copy": true, 00:28:01.481 "nvme_iov_md": false 00:28:01.481 }, 00:28:01.481 "memory_domains": [ 00:28:01.481 { 00:28:01.481 "dma_device_id": "system", 00:28:01.481 "dma_device_type": 1 00:28:01.481 }, 00:28:01.481 { 00:28:01.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:01.481 "dma_device_type": 2 00:28:01.481 } 00:28:01.481 ], 00:28:01.481 "driver_specific": {} 00:28:01.481 } 00:28:01.481 ] 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
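The recreation of BaseBdev1 just traced uses the harness's usual waitforbdev pattern: create the malloc disk, run bdev_wait_for_examine so claim/examine callbacks complete, then poll bdev_get_bdevs with a timeout. A condensed sketch of that sequence, with the 2000 ms timeout and 32 MiB / 512 B geometry taken from the trace:

# Recreate the missing member and block until the bdev layer has examined it.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB volume, 512-byte blocks
$rpc bdev_wait_for_examine                    # let examine/claim callbacks run to completion
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null   # -t 2000: fail if not visible within 2000 ms

Because Existed_Raid was configured with BaseBdev1 as an expected member, the new malloc disk is claimed with claim_type exclusive_write as soon as examine completes, which is exactly what the bdev_get_bdevs JSON above reports.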
00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.481 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.048 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:02.048 "name": "Existed_Raid", 00:28:02.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.048 "strip_size_kb": 64, 00:28:02.048 "state": "configuring", 00:28:02.048 "raid_level": "raid5f", 00:28:02.048 "superblock": false, 00:28:02.048 "num_base_bdevs": 4, 00:28:02.048 "num_base_bdevs_discovered": 3, 00:28:02.048 "num_base_bdevs_operational": 4, 00:28:02.048 "base_bdevs_list": [ 00:28:02.048 { 00:28:02.048 "name": "BaseBdev1", 00:28:02.048 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:02.048 "is_configured": true, 00:28:02.048 "data_offset": 0, 00:28:02.048 "data_size": 65536 00:28:02.048 }, 00:28:02.048 { 00:28:02.048 "name": null, 00:28:02.048 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:02.048 "is_configured": false, 00:28:02.048 "data_offset": 0, 00:28:02.048 "data_size": 65536 00:28:02.048 }, 00:28:02.048 { 00:28:02.048 "name": "BaseBdev3", 00:28:02.048 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:02.048 "is_configured": true, 00:28:02.048 "data_offset": 0, 00:28:02.048 "data_size": 65536 00:28:02.048 }, 00:28:02.048 { 00:28:02.048 "name": "BaseBdev4", 00:28:02.048 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:02.048 "is_configured": true, 00:28:02.048 "data_offset": 0, 00:28:02.048 "data_size": 65536 00:28:02.048 } 00:28:02.048 ] 00:28:02.048 }' 00:28:02.048 11:39:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:02.048 11:39:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.614 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.614 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:02.871 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:02.871 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:03.129 [2024-07-25 11:39:18.897077] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.129 
11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.129 11:39:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.386 11:39:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.386 "name": "Existed_Raid", 00:28:03.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.386 "strip_size_kb": 64, 00:28:03.386 "state": "configuring", 00:28:03.386 "raid_level": "raid5f", 00:28:03.386 "superblock": false, 00:28:03.386 "num_base_bdevs": 4, 00:28:03.386 "num_base_bdevs_discovered": 2, 00:28:03.386 "num_base_bdevs_operational": 4, 00:28:03.386 "base_bdevs_list": [ 00:28:03.386 { 00:28:03.386 "name": "BaseBdev1", 00:28:03.386 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:03.386 "is_configured": true, 00:28:03.386 "data_offset": 0, 00:28:03.386 "data_size": 65536 00:28:03.386 }, 00:28:03.386 { 00:28:03.386 "name": null, 00:28:03.386 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:03.386 "is_configured": false, 00:28:03.386 "data_offset": 0, 00:28:03.386 "data_size": 65536 00:28:03.386 }, 00:28:03.386 { 00:28:03.386 "name": null, 00:28:03.386 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:03.386 "is_configured": false, 00:28:03.386 "data_offset": 0, 00:28:03.386 "data_size": 65536 00:28:03.386 }, 00:28:03.386 { 00:28:03.386 "name": "BaseBdev4", 00:28:03.386 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:03.386 "is_configured": true, 00:28:03.386 "data_offset": 0, 00:28:03.386 "data_size": 65536 00:28:03.386 } 00:28:03.386 ] 00:28:03.386 }' 00:28:03.386 11:39:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.386 11:39:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.318 11:39:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.318 11:39:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:04.575 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:04.575 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:04.833 [2024-07-25 11:39:20.481534] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:04.833 
11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.833 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.091 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.091 "name": "Existed_Raid", 00:28:05.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.091 "strip_size_kb": 64, 00:28:05.091 "state": "configuring", 00:28:05.091 "raid_level": "raid5f", 00:28:05.091 "superblock": false, 00:28:05.091 "num_base_bdevs": 4, 00:28:05.091 "num_base_bdevs_discovered": 3, 00:28:05.091 "num_base_bdevs_operational": 4, 00:28:05.091 "base_bdevs_list": [ 00:28:05.091 { 00:28:05.091 "name": "BaseBdev1", 00:28:05.091 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:05.091 "is_configured": true, 00:28:05.091 "data_offset": 0, 00:28:05.091 "data_size": 65536 00:28:05.091 }, 00:28:05.091 { 00:28:05.091 "name": null, 00:28:05.091 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:05.091 "is_configured": false, 00:28:05.091 "data_offset": 0, 00:28:05.091 "data_size": 65536 00:28:05.091 }, 00:28:05.091 { 00:28:05.091 "name": "BaseBdev3", 00:28:05.091 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:05.091 "is_configured": true, 00:28:05.091 "data_offset": 0, 00:28:05.091 "data_size": 65536 00:28:05.091 }, 00:28:05.091 { 00:28:05.091 "name": "BaseBdev4", 00:28:05.091 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:05.091 "is_configured": true, 00:28:05.091 "data_offset": 0, 00:28:05.091 "data_size": 65536 00:28:05.091 } 00:28:05.091 ] 00:28:05.091 }' 00:28:05.091 11:39:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.091 11:39:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.025 11:39:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:06.025 11:39:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.025 11:39:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:06.025 11:39:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:06.282 [2024-07-25 11:39:22.078341] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.541 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.799 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.799 "name": "Existed_Raid", 00:28:06.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.799 "strip_size_kb": 64, 00:28:06.799 "state": "configuring", 00:28:06.799 "raid_level": "raid5f", 00:28:06.799 "superblock": false, 00:28:06.799 "num_base_bdevs": 4, 00:28:06.799 "num_base_bdevs_discovered": 2, 00:28:06.799 "num_base_bdevs_operational": 4, 00:28:06.799 "base_bdevs_list": [ 00:28:06.799 { 00:28:06.799 "name": null, 00:28:06.799 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:06.799 "is_configured": false, 00:28:06.799 "data_offset": 0, 00:28:06.799 "data_size": 65536 00:28:06.799 }, 00:28:06.799 { 00:28:06.799 "name": null, 00:28:06.799 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:06.799 "is_configured": false, 00:28:06.799 "data_offset": 0, 00:28:06.799 "data_size": 65536 00:28:06.799 }, 00:28:06.799 { 00:28:06.799 "name": "BaseBdev3", 00:28:06.799 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:06.799 "is_configured": true, 00:28:06.799 "data_offset": 0, 00:28:06.799 "data_size": 65536 00:28:06.799 }, 00:28:06.799 { 00:28:06.799 "name": "BaseBdev4", 00:28:06.799 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:06.799 "is_configured": true, 00:28:06.799 "data_offset": 0, 00:28:06.799 "data_size": 65536 00:28:06.799 } 00:28:06.799 ] 00:28:06.799 }' 00:28:06.799 11:39:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.799 11:39:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.363 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.363 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:07.928 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:07.928 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid 
BaseBdev2 00:28:07.929 [2024-07-25 11:39:23.764321] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.929 11:39:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.495 11:39:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.495 "name": "Existed_Raid", 00:28:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.495 "strip_size_kb": 64, 00:28:08.495 "state": "configuring", 00:28:08.495 "raid_level": "raid5f", 00:28:08.495 "superblock": false, 00:28:08.495 "num_base_bdevs": 4, 00:28:08.495 "num_base_bdevs_discovered": 3, 00:28:08.495 "num_base_bdevs_operational": 4, 00:28:08.495 "base_bdevs_list": [ 00:28:08.495 { 00:28:08.495 "name": null, 00:28:08.495 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:08.495 "is_configured": false, 00:28:08.495 "data_offset": 0, 00:28:08.495 "data_size": 65536 00:28:08.495 }, 00:28:08.495 { 00:28:08.495 "name": "BaseBdev2", 00:28:08.495 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:08.495 "is_configured": true, 00:28:08.495 "data_offset": 0, 00:28:08.495 "data_size": 65536 00:28:08.495 }, 00:28:08.495 { 00:28:08.495 "name": "BaseBdev3", 00:28:08.495 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:08.495 "is_configured": true, 00:28:08.495 "data_offset": 0, 00:28:08.495 "data_size": 65536 00:28:08.495 }, 00:28:08.495 { 00:28:08.495 "name": "BaseBdev4", 00:28:08.495 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:08.495 "is_configured": true, 00:28:08.495 "data_offset": 0, 00:28:08.495 "data_size": 65536 00:28:08.495 } 00:28:08.495 ] 00:28:08.495 }' 00:28:08.495 11:39:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.495 11:39:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.061 11:39:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:09.061 11:39:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.319 11:39:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:09.319 11:39:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:09.319 11:39:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.576 11:39:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5b315a59-852f-4a47-95cc-fa9dee699382 00:28:09.834 [2024-07-25 11:39:25.649434] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:09.834 [2024-07-25 11:39:25.649505] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:09.834 [2024-07-25 11:39:25.649532] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:09.834 [2024-07-25 11:39:25.649932] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:09.834 NewBaseBdev 00:28:09.834 [2024-07-25 11:39:25.656354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:09.834 [2024-07-25 11:39:25.656378] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:09.834 [2024-07-25 11:39:25.656792] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:09.834 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.091 11:39:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:10.350 [ 00:28:10.350 { 00:28:10.350 "name": "NewBaseBdev", 00:28:10.350 "aliases": [ 00:28:10.350 "5b315a59-852f-4a47-95cc-fa9dee699382" 00:28:10.350 ], 00:28:10.350 "product_name": "Malloc disk", 00:28:10.350 "block_size": 512, 00:28:10.350 "num_blocks": 65536, 00:28:10.350 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:10.350 "assigned_rate_limits": { 00:28:10.350 "rw_ios_per_sec": 0, 00:28:10.350 "rw_mbytes_per_sec": 0, 00:28:10.350 "r_mbytes_per_sec": 0, 00:28:10.350 "w_mbytes_per_sec": 0 00:28:10.350 }, 00:28:10.350 "claimed": true, 00:28:10.350 "claim_type": "exclusive_write", 00:28:10.350 "zoned": false, 00:28:10.350 "supported_io_types": { 00:28:10.350 "read": true, 00:28:10.350 "write": true, 00:28:10.350 "unmap": true, 00:28:10.350 "flush": true, 00:28:10.350 "reset": true, 00:28:10.350 
"nvme_admin": false, 00:28:10.350 "nvme_io": false, 00:28:10.350 "nvme_io_md": false, 00:28:10.350 "write_zeroes": true, 00:28:10.350 "zcopy": true, 00:28:10.350 "get_zone_info": false, 00:28:10.350 "zone_management": false, 00:28:10.350 "zone_append": false, 00:28:10.350 "compare": false, 00:28:10.350 "compare_and_write": false, 00:28:10.350 "abort": true, 00:28:10.350 "seek_hole": false, 00:28:10.350 "seek_data": false, 00:28:10.350 "copy": true, 00:28:10.350 "nvme_iov_md": false 00:28:10.350 }, 00:28:10.350 "memory_domains": [ 00:28:10.350 { 00:28:10.350 "dma_device_id": "system", 00:28:10.350 "dma_device_type": 1 00:28:10.350 }, 00:28:10.350 { 00:28:10.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.350 "dma_device_type": 2 00:28:10.350 } 00:28:10.350 ], 00:28:10.350 "driver_specific": {} 00:28:10.350 } 00:28:10.350 ] 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.350 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.609 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.609 "name": "Existed_Raid", 00:28:10.609 "uuid": "4d8b93ab-5a2a-4ef9-8ad5-0ec4f20e1b30", 00:28:10.609 "strip_size_kb": 64, 00:28:10.609 "state": "online", 00:28:10.609 "raid_level": "raid5f", 00:28:10.609 "superblock": false, 00:28:10.609 "num_base_bdevs": 4, 00:28:10.609 "num_base_bdevs_discovered": 4, 00:28:10.609 "num_base_bdevs_operational": 4, 00:28:10.609 "base_bdevs_list": [ 00:28:10.609 { 00:28:10.609 "name": "NewBaseBdev", 00:28:10.609 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:10.609 "is_configured": true, 00:28:10.609 "data_offset": 0, 00:28:10.609 "data_size": 65536 00:28:10.609 }, 00:28:10.609 { 00:28:10.609 "name": "BaseBdev2", 00:28:10.609 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:10.609 "is_configured": true, 00:28:10.609 "data_offset": 0, 00:28:10.609 "data_size": 65536 00:28:10.609 }, 00:28:10.609 { 00:28:10.609 "name": "BaseBdev3", 00:28:10.609 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:10.609 "is_configured": true, 00:28:10.609 
"data_offset": 0, 00:28:10.609 "data_size": 65536 00:28:10.609 }, 00:28:10.609 { 00:28:10.609 "name": "BaseBdev4", 00:28:10.609 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:10.609 "is_configured": true, 00:28:10.609 "data_offset": 0, 00:28:10.609 "data_size": 65536 00:28:10.609 } 00:28:10.609 ] 00:28:10.609 }' 00:28:10.609 11:39:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.609 11:39:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:11.175 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:11.433 [2024-07-25 11:39:27.264755] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.433 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:11.433 "name": "Existed_Raid", 00:28:11.433 "aliases": [ 00:28:11.433 "4d8b93ab-5a2a-4ef9-8ad5-0ec4f20e1b30" 00:28:11.433 ], 00:28:11.433 "product_name": "Raid Volume", 00:28:11.433 "block_size": 512, 00:28:11.433 "num_blocks": 196608, 00:28:11.433 "uuid": "4d8b93ab-5a2a-4ef9-8ad5-0ec4f20e1b30", 00:28:11.433 "assigned_rate_limits": { 00:28:11.433 "rw_ios_per_sec": 0, 00:28:11.433 "rw_mbytes_per_sec": 0, 00:28:11.433 "r_mbytes_per_sec": 0, 00:28:11.433 "w_mbytes_per_sec": 0 00:28:11.433 }, 00:28:11.433 "claimed": false, 00:28:11.433 "zoned": false, 00:28:11.433 "supported_io_types": { 00:28:11.433 "read": true, 00:28:11.433 "write": true, 00:28:11.433 "unmap": false, 00:28:11.433 "flush": false, 00:28:11.433 "reset": true, 00:28:11.433 "nvme_admin": false, 00:28:11.433 "nvme_io": false, 00:28:11.433 "nvme_io_md": false, 00:28:11.433 "write_zeroes": true, 00:28:11.433 "zcopy": false, 00:28:11.433 "get_zone_info": false, 00:28:11.433 "zone_management": false, 00:28:11.433 "zone_append": false, 00:28:11.433 "compare": false, 00:28:11.433 "compare_and_write": false, 00:28:11.433 "abort": false, 00:28:11.433 "seek_hole": false, 00:28:11.433 "seek_data": false, 00:28:11.433 "copy": false, 00:28:11.433 "nvme_iov_md": false 00:28:11.433 }, 00:28:11.433 "driver_specific": { 00:28:11.433 "raid": { 00:28:11.433 "uuid": "4d8b93ab-5a2a-4ef9-8ad5-0ec4f20e1b30", 00:28:11.433 "strip_size_kb": 64, 00:28:11.433 "state": "online", 00:28:11.433 "raid_level": "raid5f", 00:28:11.433 "superblock": false, 00:28:11.433 "num_base_bdevs": 4, 00:28:11.433 "num_base_bdevs_discovered": 4, 00:28:11.433 "num_base_bdevs_operational": 4, 00:28:11.433 "base_bdevs_list": [ 00:28:11.433 { 00:28:11.433 "name": "NewBaseBdev", 00:28:11.433 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:11.433 "is_configured": true, 
00:28:11.433 "data_offset": 0, 00:28:11.433 "data_size": 65536 00:28:11.433 }, 00:28:11.433 { 00:28:11.433 "name": "BaseBdev2", 00:28:11.433 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:11.433 "is_configured": true, 00:28:11.433 "data_offset": 0, 00:28:11.433 "data_size": 65536 00:28:11.433 }, 00:28:11.433 { 00:28:11.433 "name": "BaseBdev3", 00:28:11.433 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:11.433 "is_configured": true, 00:28:11.433 "data_offset": 0, 00:28:11.433 "data_size": 65536 00:28:11.433 }, 00:28:11.433 { 00:28:11.433 "name": "BaseBdev4", 00:28:11.433 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:11.433 "is_configured": true, 00:28:11.433 "data_offset": 0, 00:28:11.433 "data_size": 65536 00:28:11.433 } 00:28:11.433 ] 00:28:11.433 } 00:28:11.433 } 00:28:11.433 }' 00:28:11.433 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.690 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:11.690 BaseBdev2 00:28:11.690 BaseBdev3 00:28:11.690 BaseBdev4' 00:28:11.690 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:11.690 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:11.690 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:11.690 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:11.690 "name": "NewBaseBdev", 00:28:11.690 "aliases": [ 00:28:11.690 "5b315a59-852f-4a47-95cc-fa9dee699382" 00:28:11.690 ], 00:28:11.690 "product_name": "Malloc disk", 00:28:11.690 "block_size": 512, 00:28:11.690 "num_blocks": 65536, 00:28:11.690 "uuid": "5b315a59-852f-4a47-95cc-fa9dee699382", 00:28:11.690 "assigned_rate_limits": { 00:28:11.690 "rw_ios_per_sec": 0, 00:28:11.690 "rw_mbytes_per_sec": 0, 00:28:11.690 "r_mbytes_per_sec": 0, 00:28:11.690 "w_mbytes_per_sec": 0 00:28:11.690 }, 00:28:11.690 "claimed": true, 00:28:11.690 "claim_type": "exclusive_write", 00:28:11.690 "zoned": false, 00:28:11.690 "supported_io_types": { 00:28:11.690 "read": true, 00:28:11.690 "write": true, 00:28:11.690 "unmap": true, 00:28:11.690 "flush": true, 00:28:11.690 "reset": true, 00:28:11.690 "nvme_admin": false, 00:28:11.690 "nvme_io": false, 00:28:11.690 "nvme_io_md": false, 00:28:11.690 "write_zeroes": true, 00:28:11.690 "zcopy": true, 00:28:11.690 "get_zone_info": false, 00:28:11.690 "zone_management": false, 00:28:11.690 "zone_append": false, 00:28:11.690 "compare": false, 00:28:11.690 "compare_and_write": false, 00:28:11.690 "abort": true, 00:28:11.690 "seek_hole": false, 00:28:11.690 "seek_data": false, 00:28:11.690 "copy": true, 00:28:11.690 "nvme_iov_md": false 00:28:11.690 }, 00:28:11.690 "memory_domains": [ 00:28:11.690 { 00:28:11.691 "dma_device_id": "system", 00:28:11.691 "dma_device_type": 1 00:28:11.691 }, 00:28:11.691 { 00:28:11.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.691 "dma_device_type": 2 00:28:11.691 } 00:28:11.691 ], 00:28:11.691 "driver_specific": {} 00:28:11.691 }' 00:28:11.691 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:11.948 11:39:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:11.948 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:12.206 11:39:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:12.488 "name": "BaseBdev2", 00:28:12.488 "aliases": [ 00:28:12.488 "207c4cba-5365-49a7-97e5-a9ee519f87d1" 00:28:12.488 ], 00:28:12.488 "product_name": "Malloc disk", 00:28:12.488 "block_size": 512, 00:28:12.488 "num_blocks": 65536, 00:28:12.488 "uuid": "207c4cba-5365-49a7-97e5-a9ee519f87d1", 00:28:12.488 "assigned_rate_limits": { 00:28:12.488 "rw_ios_per_sec": 0, 00:28:12.488 "rw_mbytes_per_sec": 0, 00:28:12.488 "r_mbytes_per_sec": 0, 00:28:12.488 "w_mbytes_per_sec": 0 00:28:12.488 }, 00:28:12.488 "claimed": true, 00:28:12.488 "claim_type": "exclusive_write", 00:28:12.488 "zoned": false, 00:28:12.488 "supported_io_types": { 00:28:12.488 "read": true, 00:28:12.488 "write": true, 00:28:12.488 "unmap": true, 00:28:12.488 "flush": true, 00:28:12.488 "reset": true, 00:28:12.488 "nvme_admin": false, 00:28:12.488 "nvme_io": false, 00:28:12.488 "nvme_io_md": false, 00:28:12.488 "write_zeroes": true, 00:28:12.488 "zcopy": true, 00:28:12.488 "get_zone_info": false, 00:28:12.488 "zone_management": false, 00:28:12.488 "zone_append": false, 00:28:12.488 "compare": false, 00:28:12.488 "compare_and_write": false, 00:28:12.488 "abort": true, 00:28:12.488 "seek_hole": false, 00:28:12.488 "seek_data": false, 00:28:12.488 "copy": true, 00:28:12.488 "nvme_iov_md": false 00:28:12.488 }, 00:28:12.488 "memory_domains": [ 00:28:12.488 { 00:28:12.488 "dma_device_id": "system", 00:28:12.488 "dma_device_type": 1 00:28:12.488 }, 00:28:12.488 { 00:28:12.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.488 "dma_device_type": 2 00:28:12.488 } 00:28:12.488 ], 00:28:12.488 "driver_specific": {} 00:28:12.488 }' 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:28:12.488 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:12.759 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:13.017 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:13.017 "name": "BaseBdev3", 00:28:13.017 "aliases": [ 00:28:13.017 "91d054f0-08f5-431f-b273-ded3d39d8283" 00:28:13.017 ], 00:28:13.017 "product_name": "Malloc disk", 00:28:13.017 "block_size": 512, 00:28:13.017 "num_blocks": 65536, 00:28:13.017 "uuid": "91d054f0-08f5-431f-b273-ded3d39d8283", 00:28:13.017 "assigned_rate_limits": { 00:28:13.017 "rw_ios_per_sec": 0, 00:28:13.017 "rw_mbytes_per_sec": 0, 00:28:13.017 "r_mbytes_per_sec": 0, 00:28:13.017 "w_mbytes_per_sec": 0 00:28:13.017 }, 00:28:13.017 "claimed": true, 00:28:13.017 "claim_type": "exclusive_write", 00:28:13.017 "zoned": false, 00:28:13.017 "supported_io_types": { 00:28:13.017 "read": true, 00:28:13.017 "write": true, 00:28:13.017 "unmap": true, 00:28:13.017 "flush": true, 00:28:13.017 "reset": true, 00:28:13.017 "nvme_admin": false, 00:28:13.017 "nvme_io": false, 00:28:13.017 "nvme_io_md": false, 00:28:13.017 "write_zeroes": true, 00:28:13.017 "zcopy": true, 00:28:13.017 "get_zone_info": false, 00:28:13.017 "zone_management": false, 00:28:13.017 "zone_append": false, 00:28:13.017 "compare": false, 00:28:13.017 "compare_and_write": false, 00:28:13.017 "abort": true, 00:28:13.017 "seek_hole": false, 00:28:13.017 "seek_data": false, 00:28:13.017 "copy": true, 00:28:13.017 "nvme_iov_md": false 00:28:13.017 }, 00:28:13.017 "memory_domains": [ 00:28:13.017 { 00:28:13.017 "dma_device_id": "system", 00:28:13.017 "dma_device_type": 1 00:28:13.017 }, 00:28:13.017 { 00:28:13.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.017 "dma_device_type": 2 00:28:13.017 } 00:28:13.017 ], 00:28:13.017 "driver_specific": {} 00:28:13.017 }' 00:28:13.017 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.017 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.275 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:13.275 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:13.275 11:39:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:13.275 11:39:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:13.275 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:13.275 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:13.275 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:13.275 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:13.532 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:13.532 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:13.532 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:13.532 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:13.532 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:13.789 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:13.790 "name": "BaseBdev4", 00:28:13.790 "aliases": [ 00:28:13.790 "33a2b638-807c-4db5-bffe-31dbc7341edb" 00:28:13.790 ], 00:28:13.790 "product_name": "Malloc disk", 00:28:13.790 "block_size": 512, 00:28:13.790 "num_blocks": 65536, 00:28:13.790 "uuid": "33a2b638-807c-4db5-bffe-31dbc7341edb", 00:28:13.790 "assigned_rate_limits": { 00:28:13.790 "rw_ios_per_sec": 0, 00:28:13.790 "rw_mbytes_per_sec": 0, 00:28:13.790 "r_mbytes_per_sec": 0, 00:28:13.790 "w_mbytes_per_sec": 0 00:28:13.790 }, 00:28:13.790 "claimed": true, 00:28:13.790 "claim_type": "exclusive_write", 00:28:13.790 "zoned": false, 00:28:13.790 "supported_io_types": { 00:28:13.790 "read": true, 00:28:13.790 "write": true, 00:28:13.790 "unmap": true, 00:28:13.790 "flush": true, 00:28:13.790 "reset": true, 00:28:13.790 "nvme_admin": false, 00:28:13.790 "nvme_io": false, 00:28:13.790 "nvme_io_md": false, 00:28:13.790 "write_zeroes": true, 00:28:13.790 "zcopy": true, 00:28:13.790 "get_zone_info": false, 00:28:13.790 "zone_management": false, 00:28:13.790 "zone_append": false, 00:28:13.790 "compare": false, 00:28:13.790 "compare_and_write": false, 00:28:13.790 "abort": true, 00:28:13.790 "seek_hole": false, 00:28:13.790 "seek_data": false, 00:28:13.790 "copy": true, 00:28:13.790 "nvme_iov_md": false 00:28:13.790 }, 00:28:13.790 "memory_domains": [ 00:28:13.790 { 00:28:13.790 "dma_device_id": "system", 00:28:13.790 "dma_device_type": 1 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.790 "dma_device_type": 2 00:28:13.790 } 00:28:13.790 ], 00:28:13.790 "driver_specific": {} 00:28:13.790 }' 00:28:13.790 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.790 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:13.790 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:13.790 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:13.790 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:14.047 11:39:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:14.304 [2024-07-25 11:39:30.137257] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:14.304 [2024-07-25 11:39:30.137523] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.304 [2024-07-25 11:39:30.137789] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.304 [2024-07-25 11:39:30.138315] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.304 [2024-07-25 11:39:30.138351] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 95421 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 95421 ']' 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 95421 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95421 00:28:14.304 killing process with pid 95421 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95421' 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 95421 00:28:14.304 [2024-07-25 11:39:30.185042] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:14.304 11:39:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 95421 00:28:14.868 [2024-07-25 11:39:30.530413] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:16.240 ************************************ 00:28:16.240 END TEST raid5f_state_function_test 00:28:16.240 ************************************ 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:28:16.240 00:28:16.240 real 0m39.516s 00:28:16.240 user 1m12.628s 00:28:16.240 sys 0m5.021s 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:16.240 11:39:31 
bdev_raid -- bdev/bdev_raid.sh@966 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:28:16.240 11:39:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:16.240 11:39:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.240 11:39:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:16.240 ************************************ 00:28:16.240 START TEST raid5f_state_function_test_sb 00:28:16.240 ************************************ 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:28:16.240 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:28:16.241 11:39:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=96516 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 96516' 00:28:16.241 Process raid pid: 96516 00:28:16.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 96516 /var/tmp/spdk-raid.sock 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 96516 ']' 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.241 11:39:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.241 [2024-07-25 11:39:31.859691] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
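The raid5f_state_function_test_sb run above launches a dedicated bdev_svc application (raid_pid 96516) with its RPC server on /var/tmp/spdk-raid.sock and then blocks in waitforlisten until that socket answers. A minimal sketch of that start-and-poll pattern is given below; it reuses only paths and values visible in this log (the repo under /home/vagrant/spdk_repo/spdk, the -i 0 -L bdev_raid arguments, max_retries=100), while the use of rpc_get_methods as a liveness probe and the 0.5 s sleep are illustrative assumptions — the real waitforlisten helper in autotest_common.sh does additional pid and error bookkeeping.

# Sketch (assumed simplification of waitforlisten): start bdev_svc, poll its RPC socket.
rpc_sock=/var/tmp/spdk-raid.sock
spdk_dir=/home/vagrant/spdk_repo/spdk

"$spdk_dir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
raid_pid=$!

for ((retry = 0; retry < 100; retry++)); do
    # Any successful RPC round-trip means the app is up and listening.
    if "$spdk_dir/scripts/rpc.py" -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done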
00:28:16.241 [2024-07-25 11:39:31.859873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.241 [2024-07-25 11:39:32.037137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.498 [2024-07-25 11:39:32.274052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.756 [2024-07-25 11:39:32.476420] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.756 [2024-07-25 11:39:32.476463] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:17.030 11:39:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.030 11:39:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:28:17.030 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:17.290 [2024-07-25 11:39:32.958191] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:17.290 [2024-07-25 11:39:32.958259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:17.290 [2024-07-25 11:39:32.958279] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:17.290 [2024-07-25 11:39:32.958293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:17.290 [2024-07-25 11:39:32.958307] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:17.290 [2024-07-25 11:39:32.958319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:17.290 [2024-07-25 11:39:32.958330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:17.290 [2024-07-25 11:39:32.958342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.290 11:39:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.547 11:39:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:17.547 "name": "Existed_Raid", 00:28:17.547 "uuid": "87004635-1a67-4036-9029-d27a58c812fc", 00:28:17.547 "strip_size_kb": 64, 00:28:17.547 "state": "configuring", 00:28:17.547 "raid_level": "raid5f", 00:28:17.547 "superblock": true, 00:28:17.547 "num_base_bdevs": 4, 00:28:17.547 "num_base_bdevs_discovered": 0, 00:28:17.547 "num_base_bdevs_operational": 4, 00:28:17.547 "base_bdevs_list": [ 00:28:17.547 { 00:28:17.547 "name": "BaseBdev1", 00:28:17.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.547 "is_configured": false, 00:28:17.547 "data_offset": 0, 00:28:17.547 "data_size": 0 00:28:17.547 }, 00:28:17.547 { 00:28:17.547 "name": "BaseBdev2", 00:28:17.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.547 "is_configured": false, 00:28:17.547 "data_offset": 0, 00:28:17.547 "data_size": 0 00:28:17.547 }, 00:28:17.547 { 00:28:17.547 "name": "BaseBdev3", 00:28:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.548 "is_configured": false, 00:28:17.548 "data_offset": 0, 00:28:17.548 "data_size": 0 00:28:17.548 }, 00:28:17.548 { 00:28:17.548 "name": "BaseBdev4", 00:28:17.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.548 "is_configured": false, 00:28:17.548 "data_offset": 0, 00:28:17.548 "data_size": 0 00:28:17.548 } 00:28:17.548 ] 00:28:17.548 }' 00:28:17.548 11:39:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:17.548 11:39:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.113 11:39:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:18.370 [2024-07-25 11:39:34.130327] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:18.370 [2024-07-25 11:39:34.130384] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:18.370 11:39:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:18.627 [2024-07-25 11:39:34.394437] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:18.627 [2024-07-25 11:39:34.394507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:18.627 [2024-07-25 11:39:34.394526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:18.627 [2024-07-25 11:39:34.394539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:18.627 [2024-07-25 11:39:34.394551] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:18.627 [2024-07-25 11:39:34.394563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:18.627 [2024-07-25 11:39:34.394577] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
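In the superblock variant above, bdev_raid_create is issued with -s before any of the base bdevs exist, so the NOTICE/DEBUG pairs simply record that the array is registered and left waiting in the "configuring" state. A minimal sketch of that create-then-verify sequence, using only the RPCs that appear in this log and assuming the same socket, names and sizes (the $rpc shortcut variable is an illustrative convenience), might look like:

# Sketch: create a raid5f bdev with a superblock (-s) before its members exist,
# then confirm it stays in "configuring" until base bdevs are registered.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
[[ $state == configuring ]]   # no base bdevs discovered yet

# Registering a matching malloc bdev lets the raid claim it as a member;
# once all four are present the log below shows the state moving to "online".
$rpc bdev_malloc_create 32 512 -b BaseBdev1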
00:28:18.627 [2024-07-25 11:39:34.394589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:18.627 11:39:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:18.886 [2024-07-25 11:39:34.699011] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:18.886 BaseBdev1 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:18.886 11:39:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:19.143 11:39:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:19.400 [ 00:28:19.400 { 00:28:19.400 "name": "BaseBdev1", 00:28:19.400 "aliases": [ 00:28:19.400 "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b" 00:28:19.400 ], 00:28:19.400 "product_name": "Malloc disk", 00:28:19.400 "block_size": 512, 00:28:19.400 "num_blocks": 65536, 00:28:19.400 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:19.400 "assigned_rate_limits": { 00:28:19.400 "rw_ios_per_sec": 0, 00:28:19.400 "rw_mbytes_per_sec": 0, 00:28:19.400 "r_mbytes_per_sec": 0, 00:28:19.400 "w_mbytes_per_sec": 0 00:28:19.401 }, 00:28:19.401 "claimed": true, 00:28:19.401 "claim_type": "exclusive_write", 00:28:19.401 "zoned": false, 00:28:19.401 "supported_io_types": { 00:28:19.401 "read": true, 00:28:19.401 "write": true, 00:28:19.401 "unmap": true, 00:28:19.401 "flush": true, 00:28:19.401 "reset": true, 00:28:19.401 "nvme_admin": false, 00:28:19.401 "nvme_io": false, 00:28:19.401 "nvme_io_md": false, 00:28:19.401 "write_zeroes": true, 00:28:19.401 "zcopy": true, 00:28:19.401 "get_zone_info": false, 00:28:19.401 "zone_management": false, 00:28:19.401 "zone_append": false, 00:28:19.401 "compare": false, 00:28:19.401 "compare_and_write": false, 00:28:19.401 "abort": true, 00:28:19.401 "seek_hole": false, 00:28:19.401 "seek_data": false, 00:28:19.401 "copy": true, 00:28:19.401 "nvme_iov_md": false 00:28:19.401 }, 00:28:19.401 "memory_domains": [ 00:28:19.401 { 00:28:19.401 "dma_device_id": "system", 00:28:19.401 "dma_device_type": 1 00:28:19.401 }, 00:28:19.401 { 00:28:19.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:19.401 "dma_device_type": 2 00:28:19.401 } 00:28:19.401 ], 00:28:19.401 "driver_specific": {} 00:28:19.401 } 00:28:19.401 ] 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:19.658 11:39:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.658 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.916 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.916 "name": "Existed_Raid", 00:28:19.916 "uuid": "25913087-0d61-491c-8ada-86a0e603fe41", 00:28:19.916 "strip_size_kb": 64, 00:28:19.916 "state": "configuring", 00:28:19.916 "raid_level": "raid5f", 00:28:19.916 "superblock": true, 00:28:19.916 "num_base_bdevs": 4, 00:28:19.916 "num_base_bdevs_discovered": 1, 00:28:19.916 "num_base_bdevs_operational": 4, 00:28:19.916 "base_bdevs_list": [ 00:28:19.916 { 00:28:19.916 "name": "BaseBdev1", 00:28:19.916 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:19.916 "is_configured": true, 00:28:19.916 "data_offset": 2048, 00:28:19.916 "data_size": 63488 00:28:19.916 }, 00:28:19.916 { 00:28:19.916 "name": "BaseBdev2", 00:28:19.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.916 "is_configured": false, 00:28:19.916 "data_offset": 0, 00:28:19.916 "data_size": 0 00:28:19.916 }, 00:28:19.916 { 00:28:19.916 "name": "BaseBdev3", 00:28:19.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.916 "is_configured": false, 00:28:19.916 "data_offset": 0, 00:28:19.916 "data_size": 0 00:28:19.916 }, 00:28:19.916 { 00:28:19.916 "name": "BaseBdev4", 00:28:19.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.916 "is_configured": false, 00:28:19.916 "data_offset": 0, 00:28:19.916 "data_size": 0 00:28:19.916 } 00:28:19.916 ] 00:28:19.916 }' 00:28:19.916 11:39:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.916 11:39:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.481 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:20.739 [2024-07-25 11:39:36.471513] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:20.739 [2024-07-25 11:39:36.471592] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:20.739 11:39:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:20.997 [2024-07-25 11:39:36.707648] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:20.997 [2024-07-25 11:39:36.710000] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:20.997 [2024-07-25 11:39:36.710051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:20.997 [2024-07-25 11:39:36.710071] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:20.997 [2024-07-25 11:39:36.710085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:20.997 [2024-07-25 11:39:36.710101] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:20.997 [2024-07-25 11:39:36.710113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.997 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.255 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.255 "name": "Existed_Raid", 00:28:21.255 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:21.255 "strip_size_kb": 64, 00:28:21.255 "state": "configuring", 00:28:21.255 "raid_level": "raid5f", 00:28:21.255 "superblock": true, 00:28:21.255 "num_base_bdevs": 4, 00:28:21.255 "num_base_bdevs_discovered": 1, 00:28:21.255 "num_base_bdevs_operational": 4, 00:28:21.255 "base_bdevs_list": [ 00:28:21.255 { 00:28:21.255 "name": "BaseBdev1", 00:28:21.255 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:21.255 "is_configured": true, 00:28:21.255 "data_offset": 2048, 
00:28:21.255 "data_size": 63488 00:28:21.255 }, 00:28:21.255 { 00:28:21.255 "name": "BaseBdev2", 00:28:21.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.256 "is_configured": false, 00:28:21.256 "data_offset": 0, 00:28:21.256 "data_size": 0 00:28:21.256 }, 00:28:21.256 { 00:28:21.256 "name": "BaseBdev3", 00:28:21.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.256 "is_configured": false, 00:28:21.256 "data_offset": 0, 00:28:21.256 "data_size": 0 00:28:21.256 }, 00:28:21.256 { 00:28:21.256 "name": "BaseBdev4", 00:28:21.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.256 "is_configured": false, 00:28:21.256 "data_offset": 0, 00:28:21.256 "data_size": 0 00:28:21.256 } 00:28:21.256 ] 00:28:21.256 }' 00:28:21.256 11:39:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.256 11:39:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.821 11:39:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:22.388 [2024-07-25 11:39:37.978033] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:22.388 BaseBdev2 00:28:22.388 11:39:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:28:22.388 11:39:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:22.388 11:39:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:22.388 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:22.388 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:22.388 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:22.388 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:22.388 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:22.646 [ 00:28:22.646 { 00:28:22.646 "name": "BaseBdev2", 00:28:22.646 "aliases": [ 00:28:22.646 "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee" 00:28:22.646 ], 00:28:22.646 "product_name": "Malloc disk", 00:28:22.646 "block_size": 512, 00:28:22.646 "num_blocks": 65536, 00:28:22.646 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:22.646 "assigned_rate_limits": { 00:28:22.646 "rw_ios_per_sec": 0, 00:28:22.646 "rw_mbytes_per_sec": 0, 00:28:22.646 "r_mbytes_per_sec": 0, 00:28:22.646 "w_mbytes_per_sec": 0 00:28:22.646 }, 00:28:22.646 "claimed": true, 00:28:22.646 "claim_type": "exclusive_write", 00:28:22.646 "zoned": false, 00:28:22.646 "supported_io_types": { 00:28:22.646 "read": true, 00:28:22.646 "write": true, 00:28:22.646 "unmap": true, 00:28:22.646 "flush": true, 00:28:22.646 "reset": true, 00:28:22.646 "nvme_admin": false, 00:28:22.646 "nvme_io": false, 00:28:22.646 "nvme_io_md": false, 00:28:22.646 "write_zeroes": true, 00:28:22.646 "zcopy": true, 00:28:22.646 "get_zone_info": false, 00:28:22.646 "zone_management": false, 00:28:22.646 "zone_append": false, 00:28:22.646 "compare": false, 00:28:22.646 "compare_and_write": 
false, 00:28:22.646 "abort": true, 00:28:22.646 "seek_hole": false, 00:28:22.646 "seek_data": false, 00:28:22.646 "copy": true, 00:28:22.646 "nvme_iov_md": false 00:28:22.646 }, 00:28:22.646 "memory_domains": [ 00:28:22.646 { 00:28:22.646 "dma_device_id": "system", 00:28:22.646 "dma_device_type": 1 00:28:22.646 }, 00:28:22.646 { 00:28:22.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.646 "dma_device_type": 2 00:28:22.646 } 00:28:22.646 ], 00:28:22.646 "driver_specific": {} 00:28:22.646 } 00:28:22.646 ] 00:28:22.646 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:22.646 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:22.646 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:22.646 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:22.646 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.647 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.905 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.905 "name": "Existed_Raid", 00:28:22.905 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:22.905 "strip_size_kb": 64, 00:28:22.905 "state": "configuring", 00:28:22.905 "raid_level": "raid5f", 00:28:22.905 "superblock": true, 00:28:22.905 "num_base_bdevs": 4, 00:28:22.905 "num_base_bdevs_discovered": 2, 00:28:22.905 "num_base_bdevs_operational": 4, 00:28:22.905 "base_bdevs_list": [ 00:28:22.905 { 00:28:22.905 "name": "BaseBdev1", 00:28:22.905 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:22.905 "is_configured": true, 00:28:22.905 "data_offset": 2048, 00:28:22.905 "data_size": 63488 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev2", 00:28:22.905 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:22.905 "is_configured": true, 00:28:22.905 "data_offset": 2048, 00:28:22.905 "data_size": 63488 00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev3", 00:28:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.905 "is_configured": false, 00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 0 
00:28:22.905 }, 00:28:22.905 { 00:28:22.905 "name": "BaseBdev4", 00:28:22.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.905 "is_configured": false, 00:28:22.905 "data_offset": 0, 00:28:22.905 "data_size": 0 00:28:22.905 } 00:28:22.905 ] 00:28:22.905 }' 00:28:22.905 11:39:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.905 11:39:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:23.870 [2024-07-25 11:39:39.673601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:23.870 BaseBdev3 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:23.870 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:24.128 11:39:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:24.386 [ 00:28:24.386 { 00:28:24.386 "name": "BaseBdev3", 00:28:24.386 "aliases": [ 00:28:24.386 "a6ee7331-026c-4f01-bf34-5329fc26eabf" 00:28:24.386 ], 00:28:24.386 "product_name": "Malloc disk", 00:28:24.386 "block_size": 512, 00:28:24.386 "num_blocks": 65536, 00:28:24.386 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:24.386 "assigned_rate_limits": { 00:28:24.386 "rw_ios_per_sec": 0, 00:28:24.386 "rw_mbytes_per_sec": 0, 00:28:24.386 "r_mbytes_per_sec": 0, 00:28:24.386 "w_mbytes_per_sec": 0 00:28:24.386 }, 00:28:24.386 "claimed": true, 00:28:24.386 "claim_type": "exclusive_write", 00:28:24.387 "zoned": false, 00:28:24.387 "supported_io_types": { 00:28:24.387 "read": true, 00:28:24.387 "write": true, 00:28:24.387 "unmap": true, 00:28:24.387 "flush": true, 00:28:24.387 "reset": true, 00:28:24.387 "nvme_admin": false, 00:28:24.387 "nvme_io": false, 00:28:24.387 "nvme_io_md": false, 00:28:24.387 "write_zeroes": true, 00:28:24.387 "zcopy": true, 00:28:24.387 "get_zone_info": false, 00:28:24.387 "zone_management": false, 00:28:24.387 "zone_append": false, 00:28:24.387 "compare": false, 00:28:24.387 "compare_and_write": false, 00:28:24.387 "abort": true, 00:28:24.387 "seek_hole": false, 00:28:24.387 "seek_data": false, 00:28:24.387 "copy": true, 00:28:24.387 "nvme_iov_md": false 00:28:24.387 }, 00:28:24.387 "memory_domains": [ 00:28:24.387 { 00:28:24.387 "dma_device_id": "system", 00:28:24.387 "dma_device_type": 1 00:28:24.387 }, 00:28:24.387 { 00:28:24.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.387 "dma_device_type": 2 00:28:24.387 } 00:28:24.387 ], 00:28:24.387 
"driver_specific": {} 00:28:24.387 } 00:28:24.387 ] 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.387 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.646 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.646 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:24.646 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.646 "name": "Existed_Raid", 00:28:24.646 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:24.646 "strip_size_kb": 64, 00:28:24.646 "state": "configuring", 00:28:24.646 "raid_level": "raid5f", 00:28:24.646 "superblock": true, 00:28:24.646 "num_base_bdevs": 4, 00:28:24.646 "num_base_bdevs_discovered": 3, 00:28:24.646 "num_base_bdevs_operational": 4, 00:28:24.646 "base_bdevs_list": [ 00:28:24.646 { 00:28:24.646 "name": "BaseBdev1", 00:28:24.646 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:24.646 "is_configured": true, 00:28:24.646 "data_offset": 2048, 00:28:24.646 "data_size": 63488 00:28:24.646 }, 00:28:24.646 { 00:28:24.646 "name": "BaseBdev2", 00:28:24.646 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:24.646 "is_configured": true, 00:28:24.646 "data_offset": 2048, 00:28:24.646 "data_size": 63488 00:28:24.646 }, 00:28:24.646 { 00:28:24.646 "name": "BaseBdev3", 00:28:24.646 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:24.646 "is_configured": true, 00:28:24.646 "data_offset": 2048, 00:28:24.646 "data_size": 63488 00:28:24.646 }, 00:28:24.646 { 00:28:24.646 "name": "BaseBdev4", 00:28:24.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.646 "is_configured": false, 00:28:24.646 "data_offset": 0, 00:28:24.646 "data_size": 0 00:28:24.646 } 00:28:24.646 ] 00:28:24.646 }' 00:28:24.646 11:39:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.646 11:39:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:25.581 [2024-07-25 11:39:41.411753] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:25.581 [2024-07-25 11:39:41.412372] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:25.581 [2024-07-25 11:39:41.412533] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:25.581 [2024-07-25 11:39:41.412931] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:25.581 BaseBdev4 00:28:25.581 [2024-07-25 11:39:41.420073] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:25.581 [2024-07-25 11:39:41.420229] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:25.581 [2024-07-25 11:39:41.420613] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:25.581 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:25.839 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:26.097 [ 00:28:26.097 { 00:28:26.097 "name": "BaseBdev4", 00:28:26.097 "aliases": [ 00:28:26.097 "e2bd3bad-59e9-41d0-8d28-671292938f5e" 00:28:26.097 ], 00:28:26.097 "product_name": "Malloc disk", 00:28:26.097 "block_size": 512, 00:28:26.097 "num_blocks": 65536, 00:28:26.097 "uuid": "e2bd3bad-59e9-41d0-8d28-671292938f5e", 00:28:26.097 "assigned_rate_limits": { 00:28:26.097 "rw_ios_per_sec": 0, 00:28:26.097 "rw_mbytes_per_sec": 0, 00:28:26.097 "r_mbytes_per_sec": 0, 00:28:26.097 "w_mbytes_per_sec": 0 00:28:26.097 }, 00:28:26.097 "claimed": true, 00:28:26.097 "claim_type": "exclusive_write", 00:28:26.097 "zoned": false, 00:28:26.097 "supported_io_types": { 00:28:26.097 "read": true, 00:28:26.097 "write": true, 00:28:26.097 "unmap": true, 00:28:26.097 "flush": true, 00:28:26.097 "reset": true, 00:28:26.097 "nvme_admin": false, 00:28:26.097 "nvme_io": false, 00:28:26.097 "nvme_io_md": false, 00:28:26.097 "write_zeroes": true, 00:28:26.097 "zcopy": true, 00:28:26.097 "get_zone_info": false, 00:28:26.097 "zone_management": false, 00:28:26.097 "zone_append": false, 00:28:26.097 "compare": false, 00:28:26.097 "compare_and_write": false, 00:28:26.097 "abort": true, 00:28:26.097 "seek_hole": false, 00:28:26.097 "seek_data": false, 00:28:26.097 "copy": true, 00:28:26.097 
"nvme_iov_md": false 00:28:26.097 }, 00:28:26.097 "memory_domains": [ 00:28:26.097 { 00:28:26.097 "dma_device_id": "system", 00:28:26.097 "dma_device_type": 1 00:28:26.097 }, 00:28:26.097 { 00:28:26.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.097 "dma_device_type": 2 00:28:26.097 } 00:28:26.097 ], 00:28:26.097 "driver_specific": {} 00:28:26.097 } 00:28:26.097 ] 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.097 11:39:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:26.356 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.356 "name": "Existed_Raid", 00:28:26.356 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:26.356 "strip_size_kb": 64, 00:28:26.356 "state": "online", 00:28:26.356 "raid_level": "raid5f", 00:28:26.356 "superblock": true, 00:28:26.356 "num_base_bdevs": 4, 00:28:26.356 "num_base_bdevs_discovered": 4, 00:28:26.356 "num_base_bdevs_operational": 4, 00:28:26.356 "base_bdevs_list": [ 00:28:26.356 { 00:28:26.356 "name": "BaseBdev1", 00:28:26.356 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:26.356 "is_configured": true, 00:28:26.356 "data_offset": 2048, 00:28:26.356 "data_size": 63488 00:28:26.356 }, 00:28:26.356 { 00:28:26.356 "name": "BaseBdev2", 00:28:26.356 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:26.356 "is_configured": true, 00:28:26.356 "data_offset": 2048, 00:28:26.356 "data_size": 63488 00:28:26.356 }, 00:28:26.356 { 00:28:26.356 "name": "BaseBdev3", 00:28:26.356 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:26.356 "is_configured": true, 00:28:26.356 "data_offset": 2048, 00:28:26.356 "data_size": 63488 00:28:26.356 }, 00:28:26.356 { 00:28:26.356 "name": "BaseBdev4", 00:28:26.356 "uuid": "e2bd3bad-59e9-41d0-8d28-671292938f5e", 00:28:26.356 "is_configured": 
true, 00:28:26.356 "data_offset": 2048, 00:28:26.356 "data_size": 63488 00:28:26.356 } 00:28:26.356 ] 00:28:26.356 }' 00:28:26.356 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.356 11:39:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:27.290 11:39:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:27.290 [2024-07-25 11:39:43.084623] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:27.290 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:27.290 "name": "Existed_Raid", 00:28:27.290 "aliases": [ 00:28:27.290 "fbbcd0e2-fc28-41a0-bed8-17560a380fc7" 00:28:27.290 ], 00:28:27.290 "product_name": "Raid Volume", 00:28:27.290 "block_size": 512, 00:28:27.290 "num_blocks": 190464, 00:28:27.290 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:27.290 "assigned_rate_limits": { 00:28:27.290 "rw_ios_per_sec": 0, 00:28:27.290 "rw_mbytes_per_sec": 0, 00:28:27.290 "r_mbytes_per_sec": 0, 00:28:27.290 "w_mbytes_per_sec": 0 00:28:27.290 }, 00:28:27.290 "claimed": false, 00:28:27.290 "zoned": false, 00:28:27.290 "supported_io_types": { 00:28:27.290 "read": true, 00:28:27.290 "write": true, 00:28:27.290 "unmap": false, 00:28:27.290 "flush": false, 00:28:27.290 "reset": true, 00:28:27.290 "nvme_admin": false, 00:28:27.290 "nvme_io": false, 00:28:27.290 "nvme_io_md": false, 00:28:27.290 "write_zeroes": true, 00:28:27.291 "zcopy": false, 00:28:27.291 "get_zone_info": false, 00:28:27.291 "zone_management": false, 00:28:27.291 "zone_append": false, 00:28:27.291 "compare": false, 00:28:27.291 "compare_and_write": false, 00:28:27.291 "abort": false, 00:28:27.291 "seek_hole": false, 00:28:27.291 "seek_data": false, 00:28:27.291 "copy": false, 00:28:27.291 "nvme_iov_md": false 00:28:27.291 }, 00:28:27.291 "driver_specific": { 00:28:27.291 "raid": { 00:28:27.291 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:27.291 "strip_size_kb": 64, 00:28:27.291 "state": "online", 00:28:27.291 "raid_level": "raid5f", 00:28:27.291 "superblock": true, 00:28:27.291 "num_base_bdevs": 4, 00:28:27.291 "num_base_bdevs_discovered": 4, 00:28:27.291 "num_base_bdevs_operational": 4, 00:28:27.291 "base_bdevs_list": [ 00:28:27.291 { 00:28:27.291 "name": "BaseBdev1", 00:28:27.291 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:27.291 "is_configured": true, 00:28:27.291 "data_offset": 2048, 00:28:27.291 "data_size": 63488 00:28:27.291 }, 00:28:27.291 { 00:28:27.291 "name": "BaseBdev2", 00:28:27.291 "uuid": 
"5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:27.291 "is_configured": true, 00:28:27.291 "data_offset": 2048, 00:28:27.291 "data_size": 63488 00:28:27.291 }, 00:28:27.291 { 00:28:27.291 "name": "BaseBdev3", 00:28:27.291 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:27.291 "is_configured": true, 00:28:27.291 "data_offset": 2048, 00:28:27.291 "data_size": 63488 00:28:27.291 }, 00:28:27.291 { 00:28:27.291 "name": "BaseBdev4", 00:28:27.291 "uuid": "e2bd3bad-59e9-41d0-8d28-671292938f5e", 00:28:27.291 "is_configured": true, 00:28:27.291 "data_offset": 2048, 00:28:27.291 "data_size": 63488 00:28:27.291 } 00:28:27.291 ] 00:28:27.291 } 00:28:27.291 } 00:28:27.291 }' 00:28:27.291 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:27.291 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:28:27.291 BaseBdev2 00:28:27.291 BaseBdev3 00:28:27.291 BaseBdev4' 00:28:27.291 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:27.291 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:28:27.291 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:27.856 "name": "BaseBdev1", 00:28:27.856 "aliases": [ 00:28:27.856 "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b" 00:28:27.856 ], 00:28:27.856 "product_name": "Malloc disk", 00:28:27.856 "block_size": 512, 00:28:27.856 "num_blocks": 65536, 00:28:27.856 "uuid": "b4b27fec-eed6-41ca-92ff-ee44f69fbd2b", 00:28:27.856 "assigned_rate_limits": { 00:28:27.856 "rw_ios_per_sec": 0, 00:28:27.856 "rw_mbytes_per_sec": 0, 00:28:27.856 "r_mbytes_per_sec": 0, 00:28:27.856 "w_mbytes_per_sec": 0 00:28:27.856 }, 00:28:27.856 "claimed": true, 00:28:27.856 "claim_type": "exclusive_write", 00:28:27.856 "zoned": false, 00:28:27.856 "supported_io_types": { 00:28:27.856 "read": true, 00:28:27.856 "write": true, 00:28:27.856 "unmap": true, 00:28:27.856 "flush": true, 00:28:27.856 "reset": true, 00:28:27.856 "nvme_admin": false, 00:28:27.856 "nvme_io": false, 00:28:27.856 "nvme_io_md": false, 00:28:27.856 "write_zeroes": true, 00:28:27.856 "zcopy": true, 00:28:27.856 "get_zone_info": false, 00:28:27.856 "zone_management": false, 00:28:27.856 "zone_append": false, 00:28:27.856 "compare": false, 00:28:27.856 "compare_and_write": false, 00:28:27.856 "abort": true, 00:28:27.856 "seek_hole": false, 00:28:27.856 "seek_data": false, 00:28:27.856 "copy": true, 00:28:27.856 "nvme_iov_md": false 00:28:27.856 }, 00:28:27.856 "memory_domains": [ 00:28:27.856 { 00:28:27.856 "dma_device_id": "system", 00:28:27.856 "dma_device_type": 1 00:28:27.856 }, 00:28:27.856 { 00:28:27.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.856 "dma_device_type": 2 00:28:27.856 } 00:28:27.856 ], 00:28:27.856 "driver_specific": {} 00:28:27.856 }' 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:27.856 11:39:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:27.856 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:28.114 11:39:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:28.372 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:28.372 "name": "BaseBdev2", 00:28:28.372 "aliases": [ 00:28:28.372 "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee" 00:28:28.372 ], 00:28:28.372 "product_name": "Malloc disk", 00:28:28.372 "block_size": 512, 00:28:28.372 "num_blocks": 65536, 00:28:28.372 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:28.372 "assigned_rate_limits": { 00:28:28.372 "rw_ios_per_sec": 0, 00:28:28.372 "rw_mbytes_per_sec": 0, 00:28:28.372 "r_mbytes_per_sec": 0, 00:28:28.372 "w_mbytes_per_sec": 0 00:28:28.372 }, 00:28:28.372 "claimed": true, 00:28:28.372 "claim_type": "exclusive_write", 00:28:28.372 "zoned": false, 00:28:28.372 "supported_io_types": { 00:28:28.372 "read": true, 00:28:28.372 "write": true, 00:28:28.372 "unmap": true, 00:28:28.372 "flush": true, 00:28:28.372 "reset": true, 00:28:28.372 "nvme_admin": false, 00:28:28.372 "nvme_io": false, 00:28:28.372 "nvme_io_md": false, 00:28:28.372 "write_zeroes": true, 00:28:28.372 "zcopy": true, 00:28:28.372 "get_zone_info": false, 00:28:28.372 "zone_management": false, 00:28:28.372 "zone_append": false, 00:28:28.372 "compare": false, 00:28:28.372 "compare_and_write": false, 00:28:28.372 "abort": true, 00:28:28.372 "seek_hole": false, 00:28:28.372 "seek_data": false, 00:28:28.372 "copy": true, 00:28:28.372 "nvme_iov_md": false 00:28:28.372 }, 00:28:28.372 "memory_domains": [ 00:28:28.372 { 00:28:28.372 "dma_device_id": "system", 00:28:28.372 "dma_device_type": 1 00:28:28.372 }, 00:28:28.372 { 00:28:28.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:28.372 "dma_device_type": 2 00:28:28.372 } 00:28:28.372 ], 00:28:28.372 "driver_specific": {} 00:28:28.372 }' 00:28:28.372 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:28.372 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:28.630 11:39:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:28.630 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.890 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:28.890 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:28.890 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:28.890 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:28.890 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:29.152 "name": "BaseBdev3", 00:28:29.152 "aliases": [ 00:28:29.152 "a6ee7331-026c-4f01-bf34-5329fc26eabf" 00:28:29.152 ], 00:28:29.152 "product_name": "Malloc disk", 00:28:29.152 "block_size": 512, 00:28:29.152 "num_blocks": 65536, 00:28:29.152 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:29.152 "assigned_rate_limits": { 00:28:29.152 "rw_ios_per_sec": 0, 00:28:29.152 "rw_mbytes_per_sec": 0, 00:28:29.152 "r_mbytes_per_sec": 0, 00:28:29.152 "w_mbytes_per_sec": 0 00:28:29.152 }, 00:28:29.152 "claimed": true, 00:28:29.152 "claim_type": "exclusive_write", 00:28:29.152 "zoned": false, 00:28:29.152 "supported_io_types": { 00:28:29.152 "read": true, 00:28:29.152 "write": true, 00:28:29.152 "unmap": true, 00:28:29.152 "flush": true, 00:28:29.152 "reset": true, 00:28:29.152 "nvme_admin": false, 00:28:29.152 "nvme_io": false, 00:28:29.152 "nvme_io_md": false, 00:28:29.152 "write_zeroes": true, 00:28:29.152 "zcopy": true, 00:28:29.152 "get_zone_info": false, 00:28:29.152 "zone_management": false, 00:28:29.152 "zone_append": false, 00:28:29.152 "compare": false, 00:28:29.152 "compare_and_write": false, 00:28:29.152 "abort": true, 00:28:29.152 "seek_hole": false, 00:28:29.152 "seek_data": false, 00:28:29.152 "copy": true, 00:28:29.152 "nvme_iov_md": false 00:28:29.152 }, 00:28:29.152 "memory_domains": [ 00:28:29.152 { 00:28:29.152 "dma_device_id": "system", 00:28:29.152 "dma_device_type": 1 00:28:29.152 }, 00:28:29.152 { 00:28:29.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:29.152 "dma_device_type": 2 00:28:29.152 } 00:28:29.152 ], 00:28:29.152 "driver_specific": {} 00:28:29.152 }' 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:29.152 11:39:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:29.410 11:39:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:29.410 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:29.668 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:29.668 "name": "BaseBdev4", 00:28:29.668 "aliases": [ 00:28:29.668 "e2bd3bad-59e9-41d0-8d28-671292938f5e" 00:28:29.668 ], 00:28:29.668 "product_name": "Malloc disk", 00:28:29.668 "block_size": 512, 00:28:29.668 "num_blocks": 65536, 00:28:29.668 "uuid": "e2bd3bad-59e9-41d0-8d28-671292938f5e", 00:28:29.668 "assigned_rate_limits": { 00:28:29.668 "rw_ios_per_sec": 0, 00:28:29.668 "rw_mbytes_per_sec": 0, 00:28:29.668 "r_mbytes_per_sec": 0, 00:28:29.668 "w_mbytes_per_sec": 0 00:28:29.668 }, 00:28:29.668 "claimed": true, 00:28:29.668 "claim_type": "exclusive_write", 00:28:29.668 "zoned": false, 00:28:29.668 "supported_io_types": { 00:28:29.668 "read": true, 00:28:29.668 "write": true, 00:28:29.668 "unmap": true, 00:28:29.668 "flush": true, 00:28:29.668 "reset": true, 00:28:29.668 "nvme_admin": false, 00:28:29.668 "nvme_io": false, 00:28:29.668 "nvme_io_md": false, 00:28:29.668 "write_zeroes": true, 00:28:29.668 "zcopy": true, 00:28:29.668 "get_zone_info": false, 00:28:29.668 "zone_management": false, 00:28:29.668 "zone_append": false, 00:28:29.668 "compare": false, 00:28:29.668 "compare_and_write": false, 00:28:29.668 "abort": true, 00:28:29.668 "seek_hole": false, 00:28:29.668 "seek_data": false, 00:28:29.668 "copy": true, 00:28:29.668 "nvme_iov_md": false 00:28:29.668 }, 00:28:29.668 "memory_domains": [ 00:28:29.668 { 00:28:29.668 "dma_device_id": "system", 00:28:29.668 "dma_device_type": 1 00:28:29.668 }, 00:28:29.668 { 00:28:29.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:29.668 "dma_device_type": 2 00:28:29.668 } 00:28:29.668 ], 00:28:29.668 "driver_specific": {} 00:28:29.668 }' 00:28:29.668 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:29.926 11:39:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:29.926 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:30.184 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:30.184 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.184 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:30.184 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:30.185 11:39:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:30.442 [2024-07-25 11:39:46.205442] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:30.442 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.443 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:30.700 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.700 "name": "Existed_Raid", 00:28:30.700 "uuid": "fbbcd0e2-fc28-41a0-bed8-17560a380fc7", 00:28:30.700 "strip_size_kb": 64, 00:28:30.700 "state": "online", 00:28:30.700 "raid_level": "raid5f", 00:28:30.700 "superblock": true, 00:28:30.700 "num_base_bdevs": 4, 00:28:30.700 "num_base_bdevs_discovered": 3, 00:28:30.700 "num_base_bdevs_operational": 3, 00:28:30.700 "base_bdevs_list": [ 00:28:30.700 { 00:28:30.700 "name": null, 
00:28:30.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.700 "is_configured": false, 00:28:30.700 "data_offset": 2048, 00:28:30.700 "data_size": 63488 00:28:30.700 }, 00:28:30.700 { 00:28:30.700 "name": "BaseBdev2", 00:28:30.700 "uuid": "5a4d59a6-65d7-4d09-8efc-e7d05a36f5ee", 00:28:30.701 "is_configured": true, 00:28:30.701 "data_offset": 2048, 00:28:30.701 "data_size": 63488 00:28:30.701 }, 00:28:30.701 { 00:28:30.701 "name": "BaseBdev3", 00:28:30.701 "uuid": "a6ee7331-026c-4f01-bf34-5329fc26eabf", 00:28:30.701 "is_configured": true, 00:28:30.701 "data_offset": 2048, 00:28:30.701 "data_size": 63488 00:28:30.701 }, 00:28:30.701 { 00:28:30.701 "name": "BaseBdev4", 00:28:30.701 "uuid": "e2bd3bad-59e9-41d0-8d28-671292938f5e", 00:28:30.701 "is_configured": true, 00:28:30.701 "data_offset": 2048, 00:28:30.701 "data_size": 63488 00:28:30.701 } 00:28:30.701 ] 00:28:30.701 }' 00:28:30.701 11:39:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.701 11:39:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.267 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:31.267 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:31.267 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.267 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:31.832 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:31.833 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:31.833 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:31.833 [2024-07-25 11:39:47.691352] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:31.833 [2024-07-25 11:39:47.691766] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.091 [2024-07-25 11:39:47.777491] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.091 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:32.091 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:32.091 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.091 11:39:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:32.348 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:32.348 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:32.348 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:32.636 [2024-07-25 11:39:48.285765] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:32.636 11:39:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:32.636 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:32.636 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.636 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:32.895 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:32.895 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:32.895 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:33.153 [2024-07-25 11:39:48.885856] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:33.153 [2024-07-25 11:39:48.886156] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:33.153 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:33.153 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:33.153 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:33.153 11:39:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:33.412 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:33.671 BaseBdev2 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:33.671 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:33.931 11:39:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
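At this point the trace has recreated BaseBdev2 and is waiting for it through the waitforbdev helper, which reduces to three RPC calls against the test harness socket. A minimal standalone sketch of that sequence, with the rpc.py path, socket path, malloc size/block size and bdev name copied from the trace above (the 2000 timeout is the helper's bdev_timeout default seen in the trace), would be:

#!/usr/bin/env bash
# Sketch only: recreate a 32 MB malloc bdev with 512-byte blocks (65536 blocks,
# matching the JSON dump that follows) and wait until the bdev layer has
# examined and registered it, as the waitforbdev helper does in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
"$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b BaseBdev2   # prints the new bdev's name
"$RPC" -s "$SOCK" bdev_wait_for_examine                    # block until examine callbacks complete
"$RPC" -s "$SOCK" bdev_get_bdevs -b BaseBdev2 -t 2000      # wait up to the 2000 timeout for the bdev to appear

The JSON dump that follows in the log is the output of that final bdev_get_bdevs call.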
00:28:34.190 [ 00:28:34.190 { 00:28:34.190 "name": "BaseBdev2", 00:28:34.190 "aliases": [ 00:28:34.190 "075685f9-d466-4784-8864-ac699b84a4c1" 00:28:34.190 ], 00:28:34.190 "product_name": "Malloc disk", 00:28:34.190 "block_size": 512, 00:28:34.190 "num_blocks": 65536, 00:28:34.190 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:34.190 "assigned_rate_limits": { 00:28:34.190 "rw_ios_per_sec": 0, 00:28:34.190 "rw_mbytes_per_sec": 0, 00:28:34.190 "r_mbytes_per_sec": 0, 00:28:34.190 "w_mbytes_per_sec": 0 00:28:34.190 }, 00:28:34.190 "claimed": false, 00:28:34.190 "zoned": false, 00:28:34.190 "supported_io_types": { 00:28:34.190 "read": true, 00:28:34.190 "write": true, 00:28:34.190 "unmap": true, 00:28:34.190 "flush": true, 00:28:34.190 "reset": true, 00:28:34.190 "nvme_admin": false, 00:28:34.190 "nvme_io": false, 00:28:34.190 "nvme_io_md": false, 00:28:34.190 "write_zeroes": true, 00:28:34.190 "zcopy": true, 00:28:34.190 "get_zone_info": false, 00:28:34.190 "zone_management": false, 00:28:34.190 "zone_append": false, 00:28:34.190 "compare": false, 00:28:34.190 "compare_and_write": false, 00:28:34.190 "abort": true, 00:28:34.190 "seek_hole": false, 00:28:34.190 "seek_data": false, 00:28:34.190 "copy": true, 00:28:34.190 "nvme_iov_md": false 00:28:34.190 }, 00:28:34.190 "memory_domains": [ 00:28:34.190 { 00:28:34.190 "dma_device_id": "system", 00:28:34.190 "dma_device_type": 1 00:28:34.190 }, 00:28:34.190 { 00:28:34.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:34.190 "dma_device_type": 2 00:28:34.190 } 00:28:34.190 ], 00:28:34.190 "driver_specific": {} 00:28:34.190 } 00:28:34.190 ] 00:28:34.190 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:34.190 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:34.190 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:34.190 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:34.447 BaseBdev3 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:34.447 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:34.705 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:34.962 [ 00:28:34.962 { 00:28:34.962 "name": "BaseBdev3", 00:28:34.962 "aliases": [ 00:28:34.962 "d3568e85-7460-4011-8ccd-b648612cbb7e" 00:28:34.962 ], 00:28:34.962 "product_name": "Malloc disk", 00:28:34.962 "block_size": 512, 00:28:34.962 "num_blocks": 65536, 00:28:34.962 
"uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:34.962 "assigned_rate_limits": { 00:28:34.962 "rw_ios_per_sec": 0, 00:28:34.962 "rw_mbytes_per_sec": 0, 00:28:34.962 "r_mbytes_per_sec": 0, 00:28:34.962 "w_mbytes_per_sec": 0 00:28:34.962 }, 00:28:34.962 "claimed": false, 00:28:34.962 "zoned": false, 00:28:34.962 "supported_io_types": { 00:28:34.962 "read": true, 00:28:34.962 "write": true, 00:28:34.962 "unmap": true, 00:28:34.962 "flush": true, 00:28:34.962 "reset": true, 00:28:34.962 "nvme_admin": false, 00:28:34.962 "nvme_io": false, 00:28:34.962 "nvme_io_md": false, 00:28:34.962 "write_zeroes": true, 00:28:34.962 "zcopy": true, 00:28:34.962 "get_zone_info": false, 00:28:34.962 "zone_management": false, 00:28:34.962 "zone_append": false, 00:28:34.962 "compare": false, 00:28:34.962 "compare_and_write": false, 00:28:34.962 "abort": true, 00:28:34.962 "seek_hole": false, 00:28:34.962 "seek_data": false, 00:28:34.962 "copy": true, 00:28:34.962 "nvme_iov_md": false 00:28:34.962 }, 00:28:34.962 "memory_domains": [ 00:28:34.962 { 00:28:34.962 "dma_device_id": "system", 00:28:34.962 "dma_device_type": 1 00:28:34.962 }, 00:28:34.962 { 00:28:34.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:34.962 "dma_device_type": 2 00:28:34.962 } 00:28:34.962 ], 00:28:34.962 "driver_specific": {} 00:28:34.962 } 00:28:34.962 ] 00:28:34.962 11:39:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:34.962 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:34.962 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:34.962 11:39:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:35.220 BaseBdev4 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:35.220 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:35.478 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:35.735 [ 00:28:35.735 { 00:28:35.735 "name": "BaseBdev4", 00:28:35.735 "aliases": [ 00:28:35.735 "70903e66-89cb-47f9-98ea-d211aba58a53" 00:28:35.735 ], 00:28:35.735 "product_name": "Malloc disk", 00:28:35.735 "block_size": 512, 00:28:35.735 "num_blocks": 65536, 00:28:35.735 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:35.735 "assigned_rate_limits": { 00:28:35.735 "rw_ios_per_sec": 0, 00:28:35.735 "rw_mbytes_per_sec": 0, 00:28:35.735 "r_mbytes_per_sec": 0, 00:28:35.735 "w_mbytes_per_sec": 0 00:28:35.735 }, 00:28:35.735 "claimed": false, 
00:28:35.735 "zoned": false, 00:28:35.735 "supported_io_types": { 00:28:35.735 "read": true, 00:28:35.735 "write": true, 00:28:35.735 "unmap": true, 00:28:35.735 "flush": true, 00:28:35.735 "reset": true, 00:28:35.735 "nvme_admin": false, 00:28:35.735 "nvme_io": false, 00:28:35.735 "nvme_io_md": false, 00:28:35.735 "write_zeroes": true, 00:28:35.735 "zcopy": true, 00:28:35.735 "get_zone_info": false, 00:28:35.735 "zone_management": false, 00:28:35.735 "zone_append": false, 00:28:35.735 "compare": false, 00:28:35.735 "compare_and_write": false, 00:28:35.735 "abort": true, 00:28:35.735 "seek_hole": false, 00:28:35.735 "seek_data": false, 00:28:35.735 "copy": true, 00:28:35.735 "nvme_iov_md": false 00:28:35.735 }, 00:28:35.735 "memory_domains": [ 00:28:35.735 { 00:28:35.735 "dma_device_id": "system", 00:28:35.735 "dma_device_type": 1 00:28:35.735 }, 00:28:35.735 { 00:28:35.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:35.735 "dma_device_type": 2 00:28:35.735 } 00:28:35.735 ], 00:28:35.735 "driver_specific": {} 00:28:35.735 } 00:28:35.735 ] 00:28:35.735 11:39:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:35.735 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:28:35.735 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:28:35.735 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:35.993 [2024-07-25 11:39:51.742666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:35.993 [2024-07-25 11:39:51.742727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:35.993 [2024-07-25 11:39:51.742759] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.993 [2024-07-25 11:39:51.745073] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:35.993 [2024-07-25 11:39:51.745148] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.993 11:39:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.993 11:39:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:36.294 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:36.294 "name": "Existed_Raid", 00:28:36.294 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:36.294 "strip_size_kb": 64, 00:28:36.294 "state": "configuring", 00:28:36.294 "raid_level": "raid5f", 00:28:36.294 "superblock": true, 00:28:36.294 "num_base_bdevs": 4, 00:28:36.294 "num_base_bdevs_discovered": 3, 00:28:36.294 "num_base_bdevs_operational": 4, 00:28:36.294 "base_bdevs_list": [ 00:28:36.294 { 00:28:36.294 "name": "BaseBdev1", 00:28:36.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.294 "is_configured": false, 00:28:36.294 "data_offset": 0, 00:28:36.294 "data_size": 0 00:28:36.294 }, 00:28:36.294 { 00:28:36.294 "name": "BaseBdev2", 00:28:36.294 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:36.294 "is_configured": true, 00:28:36.294 "data_offset": 2048, 00:28:36.294 "data_size": 63488 00:28:36.294 }, 00:28:36.294 { 00:28:36.294 "name": "BaseBdev3", 00:28:36.294 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:36.294 "is_configured": true, 00:28:36.294 "data_offset": 2048, 00:28:36.294 "data_size": 63488 00:28:36.294 }, 00:28:36.294 { 00:28:36.294 "name": "BaseBdev4", 00:28:36.294 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:36.294 "is_configured": true, 00:28:36.294 "data_offset": 2048, 00:28:36.294 "data_size": 63488 00:28:36.294 } 00:28:36.294 ] 00:28:36.294 }' 00:28:36.294 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:36.294 11:39:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:36.860 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:37.118 [2024-07-25 11:39:52.922951] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.118 11:39:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:37.376 11:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.376 "name": "Existed_Raid", 00:28:37.376 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:37.376 "strip_size_kb": 64, 00:28:37.376 "state": "configuring", 00:28:37.376 "raid_level": "raid5f", 00:28:37.376 "superblock": true, 00:28:37.376 "num_base_bdevs": 4, 00:28:37.376 "num_base_bdevs_discovered": 2, 00:28:37.376 "num_base_bdevs_operational": 4, 00:28:37.376 "base_bdevs_list": [ 00:28:37.376 { 00:28:37.376 "name": "BaseBdev1", 00:28:37.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.376 "is_configured": false, 00:28:37.376 "data_offset": 0, 00:28:37.376 "data_size": 0 00:28:37.376 }, 00:28:37.376 { 00:28:37.376 "name": null, 00:28:37.376 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:37.376 "is_configured": false, 00:28:37.376 "data_offset": 2048, 00:28:37.376 "data_size": 63488 00:28:37.376 }, 00:28:37.376 { 00:28:37.376 "name": "BaseBdev3", 00:28:37.376 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:37.376 "is_configured": true, 00:28:37.376 "data_offset": 2048, 00:28:37.376 "data_size": 63488 00:28:37.376 }, 00:28:37.376 { 00:28:37.376 "name": "BaseBdev4", 00:28:37.376 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:37.376 "is_configured": true, 00:28:37.376 "data_offset": 2048, 00:28:37.376 "data_size": 63488 00:28:37.376 } 00:28:37.376 ] 00:28:37.376 }' 00:28:37.376 11:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.376 11:39:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:38.310 11:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.310 11:39:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:38.310 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:28:38.310 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:38.567 [2024-07-25 11:39:54.390509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.567 BaseBdev1 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:38.567 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:38.825 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:39.084 [ 00:28:39.084 { 00:28:39.084 "name": "BaseBdev1", 00:28:39.084 "aliases": [ 00:28:39.084 "84f1da15-b683-4272-b097-d6d926e51645" 00:28:39.084 ], 00:28:39.084 "product_name": "Malloc disk", 00:28:39.084 "block_size": 512, 00:28:39.084 "num_blocks": 65536, 00:28:39.084 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:39.084 "assigned_rate_limits": { 00:28:39.084 "rw_ios_per_sec": 0, 00:28:39.084 "rw_mbytes_per_sec": 0, 00:28:39.084 "r_mbytes_per_sec": 0, 00:28:39.084 "w_mbytes_per_sec": 0 00:28:39.084 }, 00:28:39.084 "claimed": true, 00:28:39.084 "claim_type": "exclusive_write", 00:28:39.084 "zoned": false, 00:28:39.084 "supported_io_types": { 00:28:39.084 "read": true, 00:28:39.084 "write": true, 00:28:39.084 "unmap": true, 00:28:39.084 "flush": true, 00:28:39.084 "reset": true, 00:28:39.084 "nvme_admin": false, 00:28:39.084 "nvme_io": false, 00:28:39.084 "nvme_io_md": false, 00:28:39.084 "write_zeroes": true, 00:28:39.084 "zcopy": true, 00:28:39.084 "get_zone_info": false, 00:28:39.084 "zone_management": false, 00:28:39.084 "zone_append": false, 00:28:39.084 "compare": false, 00:28:39.084 "compare_and_write": false, 00:28:39.084 "abort": true, 00:28:39.084 "seek_hole": false, 00:28:39.084 "seek_data": false, 00:28:39.084 "copy": true, 00:28:39.084 "nvme_iov_md": false 00:28:39.084 }, 00:28:39.084 "memory_domains": [ 00:28:39.084 { 00:28:39.084 "dma_device_id": "system", 00:28:39.084 "dma_device_type": 1 00:28:39.084 }, 00:28:39.084 { 00:28:39.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.084 "dma_device_type": 2 00:28:39.084 } 00:28:39.084 ], 00:28:39.084 "driver_specific": {} 00:28:39.084 } 00:28:39.084 ] 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.084 11:39:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:28:39.342 11:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.342 "name": "Existed_Raid", 00:28:39.342 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:39.342 "strip_size_kb": 64, 00:28:39.342 "state": "configuring", 00:28:39.342 "raid_level": "raid5f", 00:28:39.342 "superblock": true, 00:28:39.342 "num_base_bdevs": 4, 00:28:39.342 "num_base_bdevs_discovered": 3, 00:28:39.342 "num_base_bdevs_operational": 4, 00:28:39.342 "base_bdevs_list": [ 00:28:39.342 { 00:28:39.342 "name": "BaseBdev1", 00:28:39.342 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:39.342 "is_configured": true, 00:28:39.342 "data_offset": 2048, 00:28:39.342 "data_size": 63488 00:28:39.342 }, 00:28:39.342 { 00:28:39.343 "name": null, 00:28:39.343 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:39.343 "is_configured": false, 00:28:39.343 "data_offset": 2048, 00:28:39.343 "data_size": 63488 00:28:39.343 }, 00:28:39.343 { 00:28:39.343 "name": "BaseBdev3", 00:28:39.343 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:39.343 "is_configured": true, 00:28:39.343 "data_offset": 2048, 00:28:39.343 "data_size": 63488 00:28:39.343 }, 00:28:39.343 { 00:28:39.343 "name": "BaseBdev4", 00:28:39.343 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:39.343 "is_configured": true, 00:28:39.343 "data_offset": 2048, 00:28:39.343 "data_size": 63488 00:28:39.343 } 00:28:39.343 ] 00:28:39.343 }' 00:28:39.343 11:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.343 11:39:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.909 11:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:39.909 11:39:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.167 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:28:40.167 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:28:40.426 [2024-07-25 11:39:56.279234] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.426 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.426 11:39:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.685 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.685 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:40.685 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.685 "name": "Existed_Raid", 00:28:40.685 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:40.685 "strip_size_kb": 64, 00:28:40.685 "state": "configuring", 00:28:40.685 "raid_level": "raid5f", 00:28:40.685 "superblock": true, 00:28:40.685 "num_base_bdevs": 4, 00:28:40.685 "num_base_bdevs_discovered": 2, 00:28:40.685 "num_base_bdevs_operational": 4, 00:28:40.685 "base_bdevs_list": [ 00:28:40.685 { 00:28:40.685 "name": "BaseBdev1", 00:28:40.685 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:40.685 "is_configured": true, 00:28:40.685 "data_offset": 2048, 00:28:40.685 "data_size": 63488 00:28:40.685 }, 00:28:40.685 { 00:28:40.685 "name": null, 00:28:40.685 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:40.685 "is_configured": false, 00:28:40.685 "data_offset": 2048, 00:28:40.685 "data_size": 63488 00:28:40.685 }, 00:28:40.685 { 00:28:40.685 "name": null, 00:28:40.685 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:40.685 "is_configured": false, 00:28:40.685 "data_offset": 2048, 00:28:40.685 "data_size": 63488 00:28:40.685 }, 00:28:40.685 { 00:28:40.685 "name": "BaseBdev4", 00:28:40.685 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:40.685 "is_configured": true, 00:28:40.685 "data_offset": 2048, 00:28:40.685 "data_size": 63488 00:28:40.685 } 00:28:40.685 ] 00:28:40.685 }' 00:28:40.685 11:39:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.685 11:39:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:41.621 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:41.621 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.621 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:28:41.621 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:41.879 [2024-07-25 11:39:57.735596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=4 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.879 11:39:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:42.446 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.446 "name": "Existed_Raid", 00:28:42.446 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:42.446 "strip_size_kb": 64, 00:28:42.446 "state": "configuring", 00:28:42.446 "raid_level": "raid5f", 00:28:42.446 "superblock": true, 00:28:42.446 "num_base_bdevs": 4, 00:28:42.446 "num_base_bdevs_discovered": 3, 00:28:42.446 "num_base_bdevs_operational": 4, 00:28:42.446 "base_bdevs_list": [ 00:28:42.446 { 00:28:42.446 "name": "BaseBdev1", 00:28:42.446 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:42.446 "is_configured": true, 00:28:42.446 "data_offset": 2048, 00:28:42.446 "data_size": 63488 00:28:42.446 }, 00:28:42.446 { 00:28:42.446 "name": null, 00:28:42.446 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:42.446 "is_configured": false, 00:28:42.446 "data_offset": 2048, 00:28:42.446 "data_size": 63488 00:28:42.446 }, 00:28:42.446 { 00:28:42.446 "name": "BaseBdev3", 00:28:42.446 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:42.446 "is_configured": true, 00:28:42.446 "data_offset": 2048, 00:28:42.446 "data_size": 63488 00:28:42.446 }, 00:28:42.446 { 00:28:42.446 "name": "BaseBdev4", 00:28:42.446 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:42.446 "is_configured": true, 00:28:42.446 "data_offset": 2048, 00:28:42.446 "data_size": 63488 00:28:42.446 } 00:28:42.446 ] 00:28:42.446 }' 00:28:42.446 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.446 11:39:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:43.012 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.012 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:43.270 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:28:43.270 11:39:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:43.270 [2024-07-25 11:39:59.108129] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.534 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.792 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.792 "name": "Existed_Raid", 00:28:43.792 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:43.792 "strip_size_kb": 64, 00:28:43.792 "state": "configuring", 00:28:43.792 "raid_level": "raid5f", 00:28:43.792 "superblock": true, 00:28:43.792 "num_base_bdevs": 4, 00:28:43.792 "num_base_bdevs_discovered": 2, 00:28:43.792 "num_base_bdevs_operational": 4, 00:28:43.792 "base_bdevs_list": [ 00:28:43.792 { 00:28:43.792 "name": null, 00:28:43.792 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:43.792 "is_configured": false, 00:28:43.792 "data_offset": 2048, 00:28:43.792 "data_size": 63488 00:28:43.792 }, 00:28:43.792 { 00:28:43.792 "name": null, 00:28:43.792 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:43.792 "is_configured": false, 00:28:43.792 "data_offset": 2048, 00:28:43.792 "data_size": 63488 00:28:43.792 }, 00:28:43.792 { 00:28:43.792 "name": "BaseBdev3", 00:28:43.792 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:43.792 "is_configured": true, 00:28:43.792 "data_offset": 2048, 00:28:43.792 "data_size": 63488 00:28:43.792 }, 00:28:43.792 { 00:28:43.792 "name": "BaseBdev4", 00:28:43.792 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:43.792 "is_configured": true, 00:28:43.792 "data_offset": 2048, 00:28:43.792 "data_size": 63488 00:28:43.792 } 00:28:43.792 ] 00:28:43.792 }' 00:28:43.792 11:39:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.792 11:39:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:44.358 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:44.358 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.615 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:28:44.615 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:44.873 [2024-07-25 11:40:00.632382] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.873 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:45.140 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.140 "name": "Existed_Raid", 00:28:45.140 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:45.140 "strip_size_kb": 64, 00:28:45.140 "state": "configuring", 00:28:45.140 "raid_level": "raid5f", 00:28:45.140 "superblock": true, 00:28:45.140 "num_base_bdevs": 4, 00:28:45.140 "num_base_bdevs_discovered": 3, 00:28:45.140 "num_base_bdevs_operational": 4, 00:28:45.140 "base_bdevs_list": [ 00:28:45.140 { 00:28:45.140 "name": null, 00:28:45.140 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:45.140 "is_configured": false, 00:28:45.140 "data_offset": 2048, 00:28:45.140 "data_size": 63488 00:28:45.140 }, 00:28:45.140 { 00:28:45.140 "name": "BaseBdev2", 00:28:45.140 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:45.140 "is_configured": true, 00:28:45.140 "data_offset": 2048, 00:28:45.140 "data_size": 63488 00:28:45.140 }, 00:28:45.140 { 00:28:45.140 "name": "BaseBdev3", 00:28:45.140 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:45.140 "is_configured": true, 00:28:45.140 "data_offset": 2048, 00:28:45.140 "data_size": 63488 00:28:45.140 }, 00:28:45.140 { 00:28:45.140 "name": "BaseBdev4", 00:28:45.140 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:45.140 "is_configured": true, 00:28:45.140 "data_offset": 2048, 00:28:45.140 "data_size": 63488 00:28:45.140 } 00:28:45.140 ] 00:28:45.140 }' 00:28:45.140 11:40:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.140 11:40:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:46.075 11:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.075 11:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # 
jq '.[0].base_bdevs_list[1].is_configured' 00:28:46.075 11:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:28:46.075 11:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.075 11:40:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:46.334 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 84f1da15-b683-4272-b097-d6d926e51645 00:28:46.592 [2024-07-25 11:40:02.369248] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:46.592 [2024-07-25 11:40:02.369567] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:46.592 [2024-07-25 11:40:02.369596] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:46.592 [2024-07-25 11:40:02.369919] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:46.592 NewBaseBdev 00:28:46.592 [2024-07-25 11:40:02.376267] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:46.592 [2024-07-25 11:40:02.376424] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:46.592 [2024-07-25 11:40:02.376821] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:46.592 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:46.850 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:47.108 [ 00:28:47.108 { 00:28:47.108 "name": "NewBaseBdev", 00:28:47.108 "aliases": [ 00:28:47.108 "84f1da15-b683-4272-b097-d6d926e51645" 00:28:47.108 ], 00:28:47.108 "product_name": "Malloc disk", 00:28:47.108 "block_size": 512, 00:28:47.108 "num_blocks": 65536, 00:28:47.108 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:47.108 "assigned_rate_limits": { 00:28:47.108 "rw_ios_per_sec": 0, 00:28:47.109 "rw_mbytes_per_sec": 0, 00:28:47.109 "r_mbytes_per_sec": 0, 00:28:47.109 "w_mbytes_per_sec": 0 00:28:47.109 }, 00:28:47.109 "claimed": true, 00:28:47.109 "claim_type": "exclusive_write", 00:28:47.109 "zoned": false, 00:28:47.109 "supported_io_types": { 00:28:47.109 "read": true, 00:28:47.109 "write": true, 00:28:47.109 "unmap": true, 00:28:47.109 "flush": true, 00:28:47.109 
"reset": true, 00:28:47.109 "nvme_admin": false, 00:28:47.109 "nvme_io": false, 00:28:47.109 "nvme_io_md": false, 00:28:47.109 "write_zeroes": true, 00:28:47.109 "zcopy": true, 00:28:47.109 "get_zone_info": false, 00:28:47.109 "zone_management": false, 00:28:47.109 "zone_append": false, 00:28:47.109 "compare": false, 00:28:47.109 "compare_and_write": false, 00:28:47.109 "abort": true, 00:28:47.109 "seek_hole": false, 00:28:47.109 "seek_data": false, 00:28:47.109 "copy": true, 00:28:47.109 "nvme_iov_md": false 00:28:47.109 }, 00:28:47.109 "memory_domains": [ 00:28:47.109 { 00:28:47.109 "dma_device_id": "system", 00:28:47.109 "dma_device_type": 1 00:28:47.109 }, 00:28:47.109 { 00:28:47.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:47.109 "dma_device_type": 2 00:28:47.109 } 00:28:47.109 ], 00:28:47.109 "driver_specific": {} 00:28:47.109 } 00:28:47.109 ] 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.109 11:40:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:47.368 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.368 "name": "Existed_Raid", 00:28:47.368 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:47.368 "strip_size_kb": 64, 00:28:47.368 "state": "online", 00:28:47.368 "raid_level": "raid5f", 00:28:47.368 "superblock": true, 00:28:47.368 "num_base_bdevs": 4, 00:28:47.368 "num_base_bdevs_discovered": 4, 00:28:47.368 "num_base_bdevs_operational": 4, 00:28:47.368 "base_bdevs_list": [ 00:28:47.368 { 00:28:47.368 "name": "NewBaseBdev", 00:28:47.368 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:47.368 "is_configured": true, 00:28:47.368 "data_offset": 2048, 00:28:47.368 "data_size": 63488 00:28:47.368 }, 00:28:47.368 { 00:28:47.368 "name": "BaseBdev2", 00:28:47.368 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:47.368 "is_configured": true, 00:28:47.368 "data_offset": 2048, 00:28:47.368 "data_size": 63488 00:28:47.368 }, 00:28:47.368 { 00:28:47.368 "name": "BaseBdev3", 00:28:47.368 "uuid": 
"d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:47.368 "is_configured": true, 00:28:47.368 "data_offset": 2048, 00:28:47.368 "data_size": 63488 00:28:47.368 }, 00:28:47.368 { 00:28:47.368 "name": "BaseBdev4", 00:28:47.368 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:47.368 "is_configured": true, 00:28:47.368 "data_offset": 2048, 00:28:47.368 "data_size": 63488 00:28:47.368 } 00:28:47.368 ] 00:28:47.368 }' 00:28:47.368 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.368 11:40:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:28:47.935 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:28:47.936 11:40:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:48.194 [2024-07-25 11:40:04.001020] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.194 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:48.194 "name": "Existed_Raid", 00:28:48.194 "aliases": [ 00:28:48.194 "682fa63e-f9f3-4e1f-bff1-ada8a077e50b" 00:28:48.194 ], 00:28:48.194 "product_name": "Raid Volume", 00:28:48.194 "block_size": 512, 00:28:48.194 "num_blocks": 190464, 00:28:48.194 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:48.194 "assigned_rate_limits": { 00:28:48.194 "rw_ios_per_sec": 0, 00:28:48.194 "rw_mbytes_per_sec": 0, 00:28:48.194 "r_mbytes_per_sec": 0, 00:28:48.194 "w_mbytes_per_sec": 0 00:28:48.194 }, 00:28:48.194 "claimed": false, 00:28:48.194 "zoned": false, 00:28:48.194 "supported_io_types": { 00:28:48.194 "read": true, 00:28:48.194 "write": true, 00:28:48.194 "unmap": false, 00:28:48.194 "flush": false, 00:28:48.194 "reset": true, 00:28:48.194 "nvme_admin": false, 00:28:48.194 "nvme_io": false, 00:28:48.194 "nvme_io_md": false, 00:28:48.194 "write_zeroes": true, 00:28:48.194 "zcopy": false, 00:28:48.194 "get_zone_info": false, 00:28:48.194 "zone_management": false, 00:28:48.194 "zone_append": false, 00:28:48.194 "compare": false, 00:28:48.194 "compare_and_write": false, 00:28:48.194 "abort": false, 00:28:48.194 "seek_hole": false, 00:28:48.194 "seek_data": false, 00:28:48.194 "copy": false, 00:28:48.194 "nvme_iov_md": false 00:28:48.194 }, 00:28:48.194 "driver_specific": { 00:28:48.194 "raid": { 00:28:48.194 "uuid": "682fa63e-f9f3-4e1f-bff1-ada8a077e50b", 00:28:48.194 "strip_size_kb": 64, 00:28:48.194 "state": "online", 00:28:48.194 "raid_level": "raid5f", 00:28:48.194 "superblock": true, 00:28:48.194 "num_base_bdevs": 4, 00:28:48.194 "num_base_bdevs_discovered": 4, 00:28:48.194 "num_base_bdevs_operational": 4, 00:28:48.194 "base_bdevs_list": [ 00:28:48.195 { 00:28:48.195 
"name": "NewBaseBdev", 00:28:48.195 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:48.195 "is_configured": true, 00:28:48.195 "data_offset": 2048, 00:28:48.195 "data_size": 63488 00:28:48.195 }, 00:28:48.195 { 00:28:48.195 "name": "BaseBdev2", 00:28:48.195 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:48.195 "is_configured": true, 00:28:48.195 "data_offset": 2048, 00:28:48.195 "data_size": 63488 00:28:48.195 }, 00:28:48.195 { 00:28:48.195 "name": "BaseBdev3", 00:28:48.195 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:48.195 "is_configured": true, 00:28:48.195 "data_offset": 2048, 00:28:48.195 "data_size": 63488 00:28:48.195 }, 00:28:48.195 { 00:28:48.195 "name": "BaseBdev4", 00:28:48.195 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:48.195 "is_configured": true, 00:28:48.195 "data_offset": 2048, 00:28:48.195 "data_size": 63488 00:28:48.195 } 00:28:48.195 ] 00:28:48.195 } 00:28:48.195 } 00:28:48.195 }' 00:28:48.195 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:48.195 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:28:48.195 BaseBdev2 00:28:48.195 BaseBdev3 00:28:48.195 BaseBdev4' 00:28:48.195 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:48.195 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:28:48.195 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:48.453 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:48.453 "name": "NewBaseBdev", 00:28:48.453 "aliases": [ 00:28:48.453 "84f1da15-b683-4272-b097-d6d926e51645" 00:28:48.453 ], 00:28:48.453 "product_name": "Malloc disk", 00:28:48.453 "block_size": 512, 00:28:48.453 "num_blocks": 65536, 00:28:48.453 "uuid": "84f1da15-b683-4272-b097-d6d926e51645", 00:28:48.453 "assigned_rate_limits": { 00:28:48.453 "rw_ios_per_sec": 0, 00:28:48.453 "rw_mbytes_per_sec": 0, 00:28:48.453 "r_mbytes_per_sec": 0, 00:28:48.453 "w_mbytes_per_sec": 0 00:28:48.453 }, 00:28:48.453 "claimed": true, 00:28:48.453 "claim_type": "exclusive_write", 00:28:48.453 "zoned": false, 00:28:48.453 "supported_io_types": { 00:28:48.453 "read": true, 00:28:48.453 "write": true, 00:28:48.453 "unmap": true, 00:28:48.453 "flush": true, 00:28:48.453 "reset": true, 00:28:48.453 "nvme_admin": false, 00:28:48.453 "nvme_io": false, 00:28:48.453 "nvme_io_md": false, 00:28:48.453 "write_zeroes": true, 00:28:48.453 "zcopy": true, 00:28:48.453 "get_zone_info": false, 00:28:48.453 "zone_management": false, 00:28:48.453 "zone_append": false, 00:28:48.453 "compare": false, 00:28:48.453 "compare_and_write": false, 00:28:48.453 "abort": true, 00:28:48.453 "seek_hole": false, 00:28:48.454 "seek_data": false, 00:28:48.454 "copy": true, 00:28:48.454 "nvme_iov_md": false 00:28:48.454 }, 00:28:48.454 "memory_domains": [ 00:28:48.454 { 00:28:48.454 "dma_device_id": "system", 00:28:48.454 "dma_device_type": 1 00:28:48.454 }, 00:28:48.454 { 00:28:48.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:48.454 "dma_device_type": 2 00:28:48.454 } 00:28:48.454 ], 00:28:48.454 "driver_specific": {} 00:28:48.454 }' 00:28:48.454 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.712 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:28:48.970 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:49.228 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:49.228 "name": "BaseBdev2", 00:28:49.228 "aliases": [ 00:28:49.228 "075685f9-d466-4784-8864-ac699b84a4c1" 00:28:49.228 ], 00:28:49.228 "product_name": "Malloc disk", 00:28:49.228 "block_size": 512, 00:28:49.228 "num_blocks": 65536, 00:28:49.228 "uuid": "075685f9-d466-4784-8864-ac699b84a4c1", 00:28:49.228 "assigned_rate_limits": { 00:28:49.228 "rw_ios_per_sec": 0, 00:28:49.228 "rw_mbytes_per_sec": 0, 00:28:49.228 "r_mbytes_per_sec": 0, 00:28:49.228 "w_mbytes_per_sec": 0 00:28:49.228 }, 00:28:49.228 "claimed": true, 00:28:49.228 "claim_type": "exclusive_write", 00:28:49.228 "zoned": false, 00:28:49.228 "supported_io_types": { 00:28:49.228 "read": true, 00:28:49.228 "write": true, 00:28:49.228 "unmap": true, 00:28:49.228 "flush": true, 00:28:49.228 "reset": true, 00:28:49.228 "nvme_admin": false, 00:28:49.228 "nvme_io": false, 00:28:49.228 "nvme_io_md": false, 00:28:49.228 "write_zeroes": true, 00:28:49.228 "zcopy": true, 00:28:49.229 "get_zone_info": false, 00:28:49.229 "zone_management": false, 00:28:49.229 "zone_append": false, 00:28:49.229 "compare": false, 00:28:49.229 "compare_and_write": false, 00:28:49.229 "abort": true, 00:28:49.229 "seek_hole": false, 00:28:49.229 "seek_data": false, 00:28:49.229 "copy": true, 00:28:49.229 "nvme_iov_md": false 00:28:49.229 }, 00:28:49.229 "memory_domains": [ 00:28:49.229 { 00:28:49.229 "dma_device_id": "system", 00:28:49.229 "dma_device_type": 1 00:28:49.229 }, 00:28:49.229 { 00:28:49.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.229 "dma_device_type": 2 00:28:49.229 } 00:28:49.229 ], 00:28:49.229 "driver_specific": {} 00:28:49.229 }' 00:28:49.229 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:49.229 11:40:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:28:49.229 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:49.229 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.229 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:28:49.488 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:49.747 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:49.747 "name": "BaseBdev3", 00:28:49.747 "aliases": [ 00:28:49.747 "d3568e85-7460-4011-8ccd-b648612cbb7e" 00:28:49.747 ], 00:28:49.747 "product_name": "Malloc disk", 00:28:49.747 "block_size": 512, 00:28:49.747 "num_blocks": 65536, 00:28:49.747 "uuid": "d3568e85-7460-4011-8ccd-b648612cbb7e", 00:28:49.747 "assigned_rate_limits": { 00:28:49.747 "rw_ios_per_sec": 0, 00:28:49.747 "rw_mbytes_per_sec": 0, 00:28:49.747 "r_mbytes_per_sec": 0, 00:28:49.747 "w_mbytes_per_sec": 0 00:28:49.747 }, 00:28:49.747 "claimed": true, 00:28:49.747 "claim_type": "exclusive_write", 00:28:49.747 "zoned": false, 00:28:49.747 "supported_io_types": { 00:28:49.747 "read": true, 00:28:49.747 "write": true, 00:28:49.747 "unmap": true, 00:28:49.747 "flush": true, 00:28:49.747 "reset": true, 00:28:49.747 "nvme_admin": false, 00:28:49.747 "nvme_io": false, 00:28:49.747 "nvme_io_md": false, 00:28:49.747 "write_zeroes": true, 00:28:49.747 "zcopy": true, 00:28:49.747 "get_zone_info": false, 00:28:49.747 "zone_management": false, 00:28:49.747 "zone_append": false, 00:28:49.747 "compare": false, 00:28:49.747 "compare_and_write": false, 00:28:49.747 "abort": true, 00:28:49.747 "seek_hole": false, 00:28:49.747 "seek_data": false, 00:28:49.747 "copy": true, 00:28:49.747 "nvme_iov_md": false 00:28:49.747 }, 00:28:49.747 "memory_domains": [ 00:28:49.747 { 00:28:49.748 "dma_device_id": "system", 00:28:49.748 "dma_device_type": 1 00:28:49.748 }, 00:28:49.748 { 00:28:49.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.748 "dma_device_type": 2 00:28:49.748 } 00:28:49.748 ], 00:28:49.748 "driver_specific": {} 00:28:49.748 }' 00:28:49.748 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
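The checks above all follow one pattern from verify_raid_bdev_properties: dump each claimed base bdev with bdev_get_bdevs and compare its layout fields with jq. A minimal sketch of that loop, assuming the rpc.py socket used throughout this run and example bdev names; the expected 512/null values mirror the output above, and this is an illustration rather than the literal bdev_raid.sh code:

    # Illustrative sketch: check block_size/md_size/md_interleave/dif_type per base bdev.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # data block size of the malloc base bdevs
        [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata region
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type      <<< "$info") == null ]]
    done
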
00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.006 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:28:50.264 11:40:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:50.561 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:50.561 "name": "BaseBdev4", 00:28:50.561 "aliases": [ 00:28:50.561 "70903e66-89cb-47f9-98ea-d211aba58a53" 00:28:50.561 ], 00:28:50.561 "product_name": "Malloc disk", 00:28:50.561 "block_size": 512, 00:28:50.561 "num_blocks": 65536, 00:28:50.561 "uuid": "70903e66-89cb-47f9-98ea-d211aba58a53", 00:28:50.561 "assigned_rate_limits": { 00:28:50.561 "rw_ios_per_sec": 0, 00:28:50.561 "rw_mbytes_per_sec": 0, 00:28:50.561 "r_mbytes_per_sec": 0, 00:28:50.561 "w_mbytes_per_sec": 0 00:28:50.561 }, 00:28:50.561 "claimed": true, 00:28:50.561 "claim_type": "exclusive_write", 00:28:50.561 "zoned": false, 00:28:50.561 "supported_io_types": { 00:28:50.561 "read": true, 00:28:50.561 "write": true, 00:28:50.561 "unmap": true, 00:28:50.561 "flush": true, 00:28:50.561 "reset": true, 00:28:50.561 "nvme_admin": false, 00:28:50.561 "nvme_io": false, 00:28:50.561 "nvme_io_md": false, 00:28:50.561 "write_zeroes": true, 00:28:50.561 "zcopy": true, 00:28:50.561 "get_zone_info": false, 00:28:50.561 "zone_management": false, 00:28:50.561 "zone_append": false, 00:28:50.561 "compare": false, 00:28:50.561 "compare_and_write": false, 00:28:50.561 "abort": true, 00:28:50.561 "seek_hole": false, 00:28:50.561 "seek_data": false, 00:28:50.561 "copy": true, 00:28:50.561 "nvme_iov_md": false 00:28:50.561 }, 00:28:50.561 "memory_domains": [ 00:28:50.561 { 00:28:50.561 "dma_device_id": "system", 00:28:50.561 "dma_device_type": 1 00:28:50.561 }, 00:28:50.561 { 00:28:50.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.561 "dma_device_type": 2 00:28:50.561 } 00:28:50.561 ], 00:28:50.561 "driver_specific": {} 00:28:50.561 }' 00:28:50.561 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:50.561 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:50.561 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:50.561 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.561 
11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:50.819 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:51.076 [2024-07-25 11:40:06.869531] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:51.076 [2024-07-25 11:40:06.869571] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:51.076 [2024-07-25 11:40:06.869691] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:51.076 [2024-07-25 11:40:06.870047] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:51.076 [2024-07-25 11:40:06.870078] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 96516 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 96516 ']' 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 96516 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96516 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96516' 00:28:51.076 killing process with pid 96516 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 96516 00:28:51.076 [2024-07-25 11:40:06.913779] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:51.076 11:40:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 96516 00:28:51.642 [2024-07-25 11:40:07.261054] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:52.576 11:40:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:28:52.576 ************************************ 00:28:52.576 END TEST raid5f_state_function_test_sb 00:28:52.576 ************************************ 00:28:52.576 
00:28:52.576 real 0m36.685s 00:28:52.576 user 1m7.388s 00:28:52.576 sys 0m4.714s 00:28:52.576 11:40:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:52.576 11:40:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.834 11:40:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:28:52.834 11:40:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:52.834 11:40:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:52.834 11:40:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:52.834 ************************************ 00:28:52.834 START TEST raid5f_superblock_test 00:28:52.834 ************************************ 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid5f 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid5f '!=' raid1 ']' 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=97585 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 97585 /var/tmp/spdk-raid.sock 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 97585 ']' 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:52.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
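Before the superblock test starts below, note the cycle the state-function test above kept repeating: remove a base bdev, confirm the array drops back to the configuring state, then re-add the bdev and re-read the raid bdev. A condensed sketch of one such cycle, assuming the same rpc.py socket and using BaseBdev3 purely as an example:

    # Illustrative sketch of one remove/verify/re-add cycle from the test above.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev3
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
    [[ $state == configuring ]]                        # stays configuring until all 4 base bdevs are present
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    $rpc bdev_raid_delete Existed_Raid                 # final tear-down, as at the end of the test
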
00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:52.834 11:40:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:52.835 [2024-07-25 11:40:08.596436] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:28:52.835 [2024-07-25 11:40:08.596661] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97585 ] 00:28:53.093 [2024-07-25 11:40:08.773095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.351 [2024-07-25 11:40:09.045338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.609 [2024-07-25 11:40:09.248356] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:53.609 [2024-07-25 11:40:09.248401] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:53.609 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:54.177 malloc1 00:28:54.177 11:40:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:54.435 [2024-07-25 11:40:10.114307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:54.435 [2024-07-25 11:40:10.114601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.435 [2024-07-25 11:40:10.114702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:54.435 [2024-07-25 11:40:10.114986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.435 [2024-07-25 11:40:10.117888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:28:54.435 [2024-07-25 11:40:10.118068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:54.435 pt1 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:54.435 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:54.693 malloc2 00:28:54.693 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:54.951 [2024-07-25 11:40:10.692799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:54.951 [2024-07-25 11:40:10.692898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.951 [2024-07-25 11:40:10.692929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:54.951 [2024-07-25 11:40:10.692965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.951 [2024-07-25 11:40:10.695929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.951 [2024-07-25 11:40:10.695979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:54.951 pt2 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:54.951 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:54.952 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:54.952 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:55.210 malloc3 00:28:55.210 11:40:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:55.468 [2024-07-25 11:40:11.213912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:55.468 [2024-07-25 11:40:11.214004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.468 [2024-07-25 11:40:11.214037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:55.468 [2024-07-25 11:40:11.214059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.468 [2024-07-25 11:40:11.216972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.468 [2024-07-25 11:40:11.217027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:55.468 pt3 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:55.468 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:55.754 malloc4 00:28:55.754 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:56.013 [2024-07-25 11:40:11.792230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:56.013 [2024-07-25 11:40:11.792580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.013 [2024-07-25 11:40:11.792693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:56.013 [2024-07-25 11:40:11.792978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.013 [2024-07-25 11:40:11.795813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.013 [2024-07-25 11:40:11.795985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:56.013 pt4 00:28:56.013 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:56.013 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:56.013 11:40:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:56.270 [2024-07-25 11:40:12.144399] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:56.270 [2024-07-25 11:40:12.146907] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:56.270 [2024-07-25 11:40:12.147012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:56.270 [2024-07-25 11:40:12.147086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:56.270 [2024-07-25 11:40:12.147379] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:56.270 [2024-07-25 11:40:12.147404] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:56.270 [2024-07-25 11:40:12.147833] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:28:56.528 [2024-07-25 11:40:12.154869] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:56.528 [2024-07-25 11:40:12.155046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:56.528 [2024-07-25 11:40:12.155458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.528 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.786 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.786 "name": "raid_bdev1", 00:28:56.786 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:28:56.786 "strip_size_kb": 64, 00:28:56.786 "state": "online", 00:28:56.786 "raid_level": "raid5f", 00:28:56.786 "superblock": true, 00:28:56.786 "num_base_bdevs": 4, 00:28:56.786 "num_base_bdevs_discovered": 4, 00:28:56.786 "num_base_bdevs_operational": 4, 00:28:56.786 "base_bdevs_list": [ 00:28:56.786 { 00:28:56.786 "name": "pt1", 00:28:56.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:56.786 "is_configured": true, 00:28:56.786 "data_offset": 2048, 00:28:56.786 "data_size": 63488 00:28:56.786 }, 00:28:56.786 { 00:28:56.786 "name": "pt2", 00:28:56.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:56.786 "is_configured": true, 00:28:56.786 "data_offset": 2048, 00:28:56.786 "data_size": 63488 00:28:56.786 }, 00:28:56.786 { 00:28:56.786 "name": "pt3", 00:28:56.786 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:28:56.786 "is_configured": true, 00:28:56.786 "data_offset": 2048, 00:28:56.786 "data_size": 63488 00:28:56.786 }, 00:28:56.786 { 00:28:56.786 "name": "pt4", 00:28:56.786 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:56.786 "is_configured": true, 00:28:56.786 "data_offset": 2048, 00:28:56.786 "data_size": 63488 00:28:56.786 } 00:28:56.786 ] 00:28:56.786 }' 00:28:56.786 11:40:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.786 11:40:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:57.351 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:57.608 [2024-07-25 11:40:13.443562] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:57.608 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:57.608 "name": "raid_bdev1", 00:28:57.608 "aliases": [ 00:28:57.608 "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b" 00:28:57.608 ], 00:28:57.608 "product_name": "Raid Volume", 00:28:57.608 "block_size": 512, 00:28:57.608 "num_blocks": 190464, 00:28:57.608 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:28:57.608 "assigned_rate_limits": { 00:28:57.608 "rw_ios_per_sec": 0, 00:28:57.608 "rw_mbytes_per_sec": 0, 00:28:57.608 "r_mbytes_per_sec": 0, 00:28:57.608 "w_mbytes_per_sec": 0 00:28:57.608 }, 00:28:57.608 "claimed": false, 00:28:57.608 "zoned": false, 00:28:57.608 "supported_io_types": { 00:28:57.608 "read": true, 00:28:57.608 "write": true, 00:28:57.608 "unmap": false, 00:28:57.608 "flush": false, 00:28:57.608 "reset": true, 00:28:57.608 "nvme_admin": false, 00:28:57.608 "nvme_io": false, 00:28:57.608 "nvme_io_md": false, 00:28:57.608 "write_zeroes": true, 00:28:57.608 "zcopy": false, 00:28:57.608 "get_zone_info": false, 00:28:57.608 "zone_management": false, 00:28:57.608 "zone_append": false, 00:28:57.608 "compare": false, 00:28:57.608 "compare_and_write": false, 00:28:57.608 "abort": false, 00:28:57.608 "seek_hole": false, 00:28:57.608 "seek_data": false, 00:28:57.608 "copy": false, 00:28:57.608 "nvme_iov_md": false 00:28:57.608 }, 00:28:57.608 "driver_specific": { 00:28:57.608 "raid": { 00:28:57.608 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:28:57.608 "strip_size_kb": 64, 00:28:57.608 "state": "online", 00:28:57.608 "raid_level": "raid5f", 00:28:57.608 "superblock": true, 00:28:57.608 "num_base_bdevs": 4, 00:28:57.608 "num_base_bdevs_discovered": 4, 00:28:57.608 "num_base_bdevs_operational": 4, 00:28:57.608 "base_bdevs_list": [ 00:28:57.608 { 00:28:57.608 "name": "pt1", 00:28:57.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:57.608 
"is_configured": true, 00:28:57.608 "data_offset": 2048, 00:28:57.608 "data_size": 63488 00:28:57.608 }, 00:28:57.608 { 00:28:57.608 "name": "pt2", 00:28:57.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:57.608 "is_configured": true, 00:28:57.608 "data_offset": 2048, 00:28:57.608 "data_size": 63488 00:28:57.608 }, 00:28:57.608 { 00:28:57.608 "name": "pt3", 00:28:57.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:57.608 "is_configured": true, 00:28:57.608 "data_offset": 2048, 00:28:57.608 "data_size": 63488 00:28:57.608 }, 00:28:57.608 { 00:28:57.608 "name": "pt4", 00:28:57.608 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:57.608 "is_configured": true, 00:28:57.608 "data_offset": 2048, 00:28:57.608 "data_size": 63488 00:28:57.608 } 00:28:57.608 ] 00:28:57.608 } 00:28:57.608 } 00:28:57.608 }' 00:28:57.608 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:57.866 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:57.866 pt2 00:28:57.866 pt3 00:28:57.866 pt4' 00:28:57.866 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:57.866 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:57.866 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:57.866 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:57.866 "name": "pt1", 00:28:57.866 "aliases": [ 00:28:57.866 "00000000-0000-0000-0000-000000000001" 00:28:57.866 ], 00:28:57.866 "product_name": "passthru", 00:28:57.866 "block_size": 512, 00:28:57.866 "num_blocks": 65536, 00:28:57.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:57.866 "assigned_rate_limits": { 00:28:57.866 "rw_ios_per_sec": 0, 00:28:57.866 "rw_mbytes_per_sec": 0, 00:28:57.866 "r_mbytes_per_sec": 0, 00:28:57.866 "w_mbytes_per_sec": 0 00:28:57.866 }, 00:28:57.866 "claimed": true, 00:28:57.866 "claim_type": "exclusive_write", 00:28:57.866 "zoned": false, 00:28:57.866 "supported_io_types": { 00:28:57.866 "read": true, 00:28:57.866 "write": true, 00:28:57.866 "unmap": true, 00:28:57.866 "flush": true, 00:28:57.866 "reset": true, 00:28:57.866 "nvme_admin": false, 00:28:57.866 "nvme_io": false, 00:28:57.866 "nvme_io_md": false, 00:28:57.866 "write_zeroes": true, 00:28:57.866 "zcopy": true, 00:28:57.866 "get_zone_info": false, 00:28:57.866 "zone_management": false, 00:28:57.866 "zone_append": false, 00:28:57.866 "compare": false, 00:28:57.866 "compare_and_write": false, 00:28:57.866 "abort": true, 00:28:57.866 "seek_hole": false, 00:28:57.866 "seek_data": false, 00:28:57.866 "copy": true, 00:28:57.866 "nvme_iov_md": false 00:28:57.866 }, 00:28:57.866 "memory_domains": [ 00:28:57.866 { 00:28:57.866 "dma_device_id": "system", 00:28:57.866 "dma_device_type": 1 00:28:57.866 }, 00:28:57.866 { 00:28:57.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:57.866 "dma_device_type": 2 00:28:57.866 } 00:28:57.866 ], 00:28:57.866 "driver_specific": { 00:28:57.866 "passthru": { 00:28:57.867 "name": "pt1", 00:28:57.867 "base_bdev_name": "malloc1" 00:28:57.867 } 00:28:57.867 } 00:28:57.867 }' 00:28:57.867 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.125 11:40:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:58.383 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:58.642 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:58.642 "name": "pt2", 00:28:58.642 "aliases": [ 00:28:58.642 "00000000-0000-0000-0000-000000000002" 00:28:58.642 ], 00:28:58.642 "product_name": "passthru", 00:28:58.642 "block_size": 512, 00:28:58.642 "num_blocks": 65536, 00:28:58.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:58.642 "assigned_rate_limits": { 00:28:58.642 "rw_ios_per_sec": 0, 00:28:58.642 "rw_mbytes_per_sec": 0, 00:28:58.642 "r_mbytes_per_sec": 0, 00:28:58.642 "w_mbytes_per_sec": 0 00:28:58.642 }, 00:28:58.642 "claimed": true, 00:28:58.642 "claim_type": "exclusive_write", 00:28:58.642 "zoned": false, 00:28:58.642 "supported_io_types": { 00:28:58.642 "read": true, 00:28:58.642 "write": true, 00:28:58.642 "unmap": true, 00:28:58.642 "flush": true, 00:28:58.642 "reset": true, 00:28:58.642 "nvme_admin": false, 00:28:58.642 "nvme_io": false, 00:28:58.642 "nvme_io_md": false, 00:28:58.642 "write_zeroes": true, 00:28:58.642 "zcopy": true, 00:28:58.642 "get_zone_info": false, 00:28:58.642 "zone_management": false, 00:28:58.642 "zone_append": false, 00:28:58.642 "compare": false, 00:28:58.642 "compare_and_write": false, 00:28:58.642 "abort": true, 00:28:58.642 "seek_hole": false, 00:28:58.642 "seek_data": false, 00:28:58.642 "copy": true, 00:28:58.642 "nvme_iov_md": false 00:28:58.642 }, 00:28:58.642 "memory_domains": [ 00:28:58.642 { 00:28:58.642 "dma_device_id": "system", 00:28:58.642 "dma_device_type": 1 00:28:58.642 }, 00:28:58.642 { 00:28:58.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:58.642 "dma_device_type": 2 00:28:58.642 } 00:28:58.642 ], 00:28:58.642 "driver_specific": { 00:28:58.642 "passthru": { 00:28:58.642 "name": "pt2", 00:28:58.642 "base_bdev_name": "malloc2" 00:28:58.642 } 00:28:58.642 } 00:28:58.642 }' 00:28:58.642 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.642 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:58.900 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.158 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:59.158 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:59.158 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:28:59.158 11:40:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:59.416 "name": "pt3", 00:28:59.416 "aliases": [ 00:28:59.416 "00000000-0000-0000-0000-000000000003" 00:28:59.416 ], 00:28:59.416 "product_name": "passthru", 00:28:59.416 "block_size": 512, 00:28:59.416 "num_blocks": 65536, 00:28:59.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:59.416 "assigned_rate_limits": { 00:28:59.416 "rw_ios_per_sec": 0, 00:28:59.416 "rw_mbytes_per_sec": 0, 00:28:59.416 "r_mbytes_per_sec": 0, 00:28:59.416 "w_mbytes_per_sec": 0 00:28:59.416 }, 00:28:59.416 "claimed": true, 00:28:59.416 "claim_type": "exclusive_write", 00:28:59.416 "zoned": false, 00:28:59.416 "supported_io_types": { 00:28:59.416 "read": true, 00:28:59.416 "write": true, 00:28:59.416 "unmap": true, 00:28:59.416 "flush": true, 00:28:59.416 "reset": true, 00:28:59.416 "nvme_admin": false, 00:28:59.416 "nvme_io": false, 00:28:59.416 "nvme_io_md": false, 00:28:59.416 "write_zeroes": true, 00:28:59.416 "zcopy": true, 00:28:59.416 "get_zone_info": false, 00:28:59.416 "zone_management": false, 00:28:59.416 "zone_append": false, 00:28:59.416 "compare": false, 00:28:59.416 "compare_and_write": false, 00:28:59.416 "abort": true, 00:28:59.416 "seek_hole": false, 00:28:59.416 "seek_data": false, 00:28:59.416 "copy": true, 00:28:59.416 "nvme_iov_md": false 00:28:59.416 }, 00:28:59.416 "memory_domains": [ 00:28:59.416 { 00:28:59.416 "dma_device_id": "system", 00:28:59.416 "dma_device_type": 1 00:28:59.416 }, 00:28:59.416 { 00:28:59.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.416 "dma_device_type": 2 00:28:59.416 } 00:28:59.416 ], 00:28:59.416 "driver_specific": { 00:28:59.416 "passthru": { 00:28:59.416 "name": "pt3", 00:28:59.416 "base_bdev_name": "malloc3" 00:28:59.416 } 00:28:59.416 } 00:28:59.416 }' 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.416 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:28:59.674 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:59.932 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:59.932 "name": "pt4", 00:28:59.932 "aliases": [ 00:28:59.932 "00000000-0000-0000-0000-000000000004" 00:28:59.932 ], 00:28:59.932 "product_name": "passthru", 00:28:59.932 "block_size": 512, 00:28:59.932 "num_blocks": 65536, 00:28:59.932 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:59.932 "assigned_rate_limits": { 00:28:59.932 "rw_ios_per_sec": 0, 00:28:59.932 "rw_mbytes_per_sec": 0, 00:28:59.932 "r_mbytes_per_sec": 0, 00:28:59.932 "w_mbytes_per_sec": 0 00:28:59.932 }, 00:28:59.932 "claimed": true, 00:28:59.932 "claim_type": "exclusive_write", 00:28:59.932 "zoned": false, 00:28:59.932 "supported_io_types": { 00:28:59.932 "read": true, 00:28:59.932 "write": true, 00:28:59.932 "unmap": true, 00:28:59.932 "flush": true, 00:28:59.932 "reset": true, 00:28:59.932 "nvme_admin": false, 00:28:59.932 "nvme_io": false, 00:28:59.932 "nvme_io_md": false, 00:28:59.932 "write_zeroes": true, 00:28:59.933 "zcopy": true, 00:28:59.933 "get_zone_info": false, 00:28:59.933 "zone_management": false, 00:28:59.933 "zone_append": false, 00:28:59.933 "compare": false, 00:28:59.933 "compare_and_write": false, 00:28:59.933 "abort": true, 00:28:59.933 "seek_hole": false, 00:28:59.933 "seek_data": false, 00:28:59.933 "copy": true, 00:28:59.933 "nvme_iov_md": false 00:28:59.933 }, 00:28:59.933 "memory_domains": [ 00:28:59.933 { 00:28:59.933 "dma_device_id": "system", 00:28:59.933 "dma_device_type": 1 00:28:59.933 }, 00:28:59.933 { 00:28:59.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:59.933 "dma_device_type": 2 00:28:59.933 } 00:28:59.933 ], 00:28:59.933 "driver_specific": { 00:28:59.933 "passthru": { 00:28:59.933 "name": "pt4", 00:28:59.933 "base_bdev_name": "malloc4" 00:28:59.933 } 00:28:59.933 } 00:28:59.933 }' 00:28:59.933 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:59.933 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:00.190 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:00.190 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:00.190 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:00.191 11:40:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:00.191 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:00.191 11:40:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:00.191 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:00.191 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:00.191 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:00.448 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:00.449 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:00.449 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:29:00.707 [2024-07-25 11:40:16.356359] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:00.707 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b 00:29:00.708 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b ']' 00:29:00.708 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:00.965 [2024-07-25 11:40:16.592203] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.965 [2024-07-25 11:40:16.592253] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:00.965 [2024-07-25 11:40:16.592358] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:00.965 [2024-07-25 11:40:16.592480] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:00.965 [2024-07-25 11:40:16.592498] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:00.965 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.965 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:29:01.223 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:29:01.223 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:29:01.223 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:01.223 11:40:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:01.480 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:01.480 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:01.738 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:01.738 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:01.996 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:29:01.996 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:02.254 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:29:02.254 11:40:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:02.254 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:29:02.512 [2024-07-25 11:40:18.352623] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:02.512 [2024-07-25 11:40:18.355276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:02.512 [2024-07-25 11:40:18.355532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:02.512 [2024-07-25 11:40:18.355635] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:02.512 [2024-07-25 11:40:18.355803] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:02.512 [2024-07-25 11:40:18.356131] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:29:02.512 [2024-07-25 11:40:18.356402] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raidrequest: 00:29:02.512 { 00:29:02.512 "name": "raid_bdev1", 00:29:02.512 "raid_level": "raid5f", 00:29:02.512 "base_bdevs": [ 00:29:02.512 "malloc1", 00:29:02.512 "malloc2", 00:29:02.512 "malloc3", 00:29:02.512 "malloc4" 00:29:02.512 ], 00:29:02.512 "strip_size_kb": 64, 00:29:02.512 "superblock": false, 00:29:02.512 "method": "bdev_raid_create", 00:29:02.512 "req_id": 1 00:29:02.512 } 00:29:02.512 Got JSON-RPC error response 00:29:02.512 response: 00:29:02.512 { 00:29:02.512 "code": -17, 00:29:02.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:02.512 } 00:29:02.512 bdev found on bdev malloc3 00:29:02.512 [2024-07-25 11:40:18.356663] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:29:02.512 [2024-07-25 11:40:18.356702] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:02.512 [2024-07-25 11:40:18.356718] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.512 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:29:02.770 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:29:02.770 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:29:02.770 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:03.028 [2024-07-25 11:40:18.889147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:03.028 [2024-07-25 11:40:18.889230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.028 [2024-07-25 11:40:18.889264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:03.028 [2024-07-25 11:40:18.889281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.028 [2024-07-25 11:40:18.892027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.028 [2024-07-25 11:40:18.892072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:03.028 [2024-07-25 11:40:18.892194] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:03.028 [2024-07-25 11:40:18.892296] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:03.028 pt1 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.028 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.285 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.285 11:40:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.543 11:40:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.543 "name": "raid_bdev1", 00:29:03.543 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:03.543 "strip_size_kb": 64, 00:29:03.543 "state": "configuring", 00:29:03.543 "raid_level": "raid5f", 00:29:03.543 "superblock": true, 00:29:03.543 "num_base_bdevs": 4, 00:29:03.543 "num_base_bdevs_discovered": 1, 00:29:03.543 "num_base_bdevs_operational": 4, 00:29:03.543 "base_bdevs_list": [ 00:29:03.543 { 00:29:03.543 "name": "pt1", 00:29:03.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:03.543 "is_configured": true, 00:29:03.543 "data_offset": 2048, 00:29:03.543 "data_size": 63488 00:29:03.543 }, 00:29:03.543 { 00:29:03.543 "name": null, 00:29:03.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.543 "is_configured": false, 00:29:03.543 "data_offset": 2048, 00:29:03.543 "data_size": 63488 00:29:03.543 }, 00:29:03.543 { 00:29:03.543 "name": null, 00:29:03.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.543 "is_configured": false, 00:29:03.543 "data_offset": 2048, 00:29:03.543 "data_size": 63488 00:29:03.543 }, 00:29:03.543 { 00:29:03.543 "name": null, 00:29:03.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:03.543 "is_configured": false, 00:29:03.543 "data_offset": 2048, 00:29:03.543 "data_size": 63488 00:29:03.543 } 00:29:03.543 ] 00:29:03.543 }' 00:29:03.543 11:40:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.543 11:40:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.134 11:40:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:29:04.134 11:40:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:04.392 [2024-07-25 11:40:20.109483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:04.392 [2024-07-25 11:40:20.109603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.392 [2024-07-25 11:40:20.109677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:04.392 
[2024-07-25 11:40:20.109698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.392 [2024-07-25 11:40:20.110352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.392 [2024-07-25 11:40:20.110396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:04.392 [2024-07-25 11:40:20.110509] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:04.392 [2024-07-25 11:40:20.110543] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:04.392 pt2 00:29:04.392 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@488 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:04.649 [2024-07-25 11:40:20.389656] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.649 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.907 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.907 "name": "raid_bdev1", 00:29:04.907 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:04.907 "strip_size_kb": 64, 00:29:04.907 "state": "configuring", 00:29:04.907 "raid_level": "raid5f", 00:29:04.907 "superblock": true, 00:29:04.907 "num_base_bdevs": 4, 00:29:04.907 "num_base_bdevs_discovered": 1, 00:29:04.907 "num_base_bdevs_operational": 4, 00:29:04.907 "base_bdevs_list": [ 00:29:04.907 { 00:29:04.907 "name": "pt1", 00:29:04.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:04.907 "is_configured": true, 00:29:04.907 "data_offset": 2048, 00:29:04.907 "data_size": 63488 00:29:04.907 }, 00:29:04.907 { 00:29:04.907 "name": null, 00:29:04.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:04.907 "is_configured": false, 00:29:04.907 "data_offset": 2048, 00:29:04.907 "data_size": 63488 00:29:04.907 }, 00:29:04.907 { 00:29:04.908 "name": null, 00:29:04.908 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:04.908 "is_configured": false, 00:29:04.908 "data_offset": 2048, 00:29:04.908 "data_size": 63488 00:29:04.908 }, 00:29:04.908 { 00:29:04.908 "name": null, 00:29:04.908 
"uuid": "00000000-0000-0000-0000-000000000004", 00:29:04.908 "is_configured": false, 00:29:04.908 "data_offset": 2048, 00:29:04.908 "data_size": 63488 00:29:04.908 } 00:29:04.908 ] 00:29:04.908 }' 00:29:04.908 11:40:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.908 11:40:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:05.842 [2024-07-25 11:40:21.641929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:05.842 [2024-07-25 11:40:21.642020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.842 [2024-07-25 11:40:21.642050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:05.842 [2024-07-25 11:40:21.642069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.842 [2024-07-25 11:40:21.642645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.842 [2024-07-25 11:40:21.642682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:05.842 [2024-07-25 11:40:21.642788] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:05.842 [2024-07-25 11:40:21.642831] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:05.842 pt2 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:05.842 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:06.100 [2024-07-25 11:40:21.869997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:06.100 [2024-07-25 11:40:21.870295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:06.100 [2024-07-25 11:40:21.870372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:06.100 [2024-07-25 11:40:21.870510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:06.100 [2024-07-25 11:40:21.871122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:06.100 [2024-07-25 11:40:21.871296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:06.100 [2024-07-25 11:40:21.871519] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:06.100 [2024-07-25 11:40:21.871688] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:06.100 pt3 00:29:06.100 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:29:06.100 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:06.100 11:40:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@494 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:06.358 [2024-07-25 11:40:22.150046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:06.358 [2024-07-25 11:40:22.150141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:06.358 [2024-07-25 11:40:22.150174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:06.358 [2024-07-25 11:40:22.150193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:06.358 [2024-07-25 11:40:22.150813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:06.358 [2024-07-25 11:40:22.150859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:06.358 [2024-07-25 11:40:22.150965] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:06.358 [2024-07-25 11:40:22.151015] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:06.358 [2024-07-25 11:40:22.151213] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:06.358 [2024-07-25 11:40:22.151242] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:06.358 [2024-07-25 11:40:22.151547] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:06.358 pt4 00:29:06.358 [2024-07-25 11:40:22.157873] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:06.358 [2024-07-25 11:40:22.157898] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:06.358 [2024-07-25 11:40:22.158139] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.358 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.359 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:06.617 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.617 "name": "raid_bdev1", 00:29:06.617 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:06.617 "strip_size_kb": 64, 00:29:06.617 "state": "online", 00:29:06.617 "raid_level": "raid5f", 00:29:06.617 "superblock": true, 00:29:06.617 "num_base_bdevs": 4, 00:29:06.617 "num_base_bdevs_discovered": 4, 00:29:06.617 "num_base_bdevs_operational": 4, 00:29:06.617 "base_bdevs_list": [ 00:29:06.617 { 00:29:06.617 "name": "pt1", 00:29:06.617 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:06.617 "is_configured": true, 00:29:06.617 "data_offset": 2048, 00:29:06.617 "data_size": 63488 00:29:06.617 }, 00:29:06.617 { 00:29:06.617 "name": "pt2", 00:29:06.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:06.617 "is_configured": true, 00:29:06.617 "data_offset": 2048, 00:29:06.617 "data_size": 63488 00:29:06.617 }, 00:29:06.617 { 00:29:06.617 "name": "pt3", 00:29:06.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:06.617 "is_configured": true, 00:29:06.617 "data_offset": 2048, 00:29:06.617 "data_size": 63488 00:29:06.617 }, 00:29:06.617 { 00:29:06.617 "name": "pt4", 00:29:06.617 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:06.617 "is_configured": true, 00:29:06.617 "data_offset": 2048, 00:29:06.617 "data_size": 63488 00:29:06.617 } 00:29:06.617 ] 00:29:06.617 }' 00:29:06.617 11:40:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.617 11:40:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:29:07.631 [2024-07-25 11:40:23.334140] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:07.631 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:29:07.631 "name": "raid_bdev1", 00:29:07.631 "aliases": [ 00:29:07.631 "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b" 00:29:07.631 ], 00:29:07.631 "product_name": "Raid Volume", 00:29:07.631 "block_size": 512, 00:29:07.631 "num_blocks": 190464, 00:29:07.631 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:07.631 "assigned_rate_limits": { 00:29:07.631 "rw_ios_per_sec": 0, 00:29:07.631 "rw_mbytes_per_sec": 0, 00:29:07.631 "r_mbytes_per_sec": 0, 00:29:07.631 "w_mbytes_per_sec": 0 00:29:07.631 }, 00:29:07.631 "claimed": false, 00:29:07.631 "zoned": false, 00:29:07.631 "supported_io_types": { 00:29:07.631 "read": true, 00:29:07.631 "write": true, 00:29:07.631 "unmap": false, 00:29:07.631 "flush": false, 00:29:07.631 "reset": true, 00:29:07.631 "nvme_admin": false, 00:29:07.631 
"nvme_io": false, 00:29:07.631 "nvme_io_md": false, 00:29:07.631 "write_zeroes": true, 00:29:07.631 "zcopy": false, 00:29:07.631 "get_zone_info": false, 00:29:07.631 "zone_management": false, 00:29:07.631 "zone_append": false, 00:29:07.631 "compare": false, 00:29:07.631 "compare_and_write": false, 00:29:07.631 "abort": false, 00:29:07.631 "seek_hole": false, 00:29:07.631 "seek_data": false, 00:29:07.631 "copy": false, 00:29:07.631 "nvme_iov_md": false 00:29:07.631 }, 00:29:07.631 "driver_specific": { 00:29:07.631 "raid": { 00:29:07.631 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:07.631 "strip_size_kb": 64, 00:29:07.631 "state": "online", 00:29:07.631 "raid_level": "raid5f", 00:29:07.631 "superblock": true, 00:29:07.631 "num_base_bdevs": 4, 00:29:07.631 "num_base_bdevs_discovered": 4, 00:29:07.631 "num_base_bdevs_operational": 4, 00:29:07.631 "base_bdevs_list": [ 00:29:07.631 { 00:29:07.631 "name": "pt1", 00:29:07.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:07.631 "is_configured": true, 00:29:07.631 "data_offset": 2048, 00:29:07.631 "data_size": 63488 00:29:07.631 }, 00:29:07.631 { 00:29:07.631 "name": "pt2", 00:29:07.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:07.631 "is_configured": true, 00:29:07.631 "data_offset": 2048, 00:29:07.631 "data_size": 63488 00:29:07.631 }, 00:29:07.631 { 00:29:07.631 "name": "pt3", 00:29:07.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:07.631 "is_configured": true, 00:29:07.631 "data_offset": 2048, 00:29:07.631 "data_size": 63488 00:29:07.631 }, 00:29:07.631 { 00:29:07.632 "name": "pt4", 00:29:07.632 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:07.632 "is_configured": true, 00:29:07.632 "data_offset": 2048, 00:29:07.632 "data_size": 63488 00:29:07.632 } 00:29:07.632 ] 00:29:07.632 } 00:29:07.632 } 00:29:07.632 }' 00:29:07.632 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:07.632 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:29:07.632 pt2 00:29:07.632 pt3 00:29:07.632 pt4' 00:29:07.632 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:07.632 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:07.632 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:29:07.902 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:07.902 "name": "pt1", 00:29:07.902 "aliases": [ 00:29:07.902 "00000000-0000-0000-0000-000000000001" 00:29:07.902 ], 00:29:07.902 "product_name": "passthru", 00:29:07.902 "block_size": 512, 00:29:07.902 "num_blocks": 65536, 00:29:07.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:07.902 "assigned_rate_limits": { 00:29:07.902 "rw_ios_per_sec": 0, 00:29:07.902 "rw_mbytes_per_sec": 0, 00:29:07.902 "r_mbytes_per_sec": 0, 00:29:07.902 "w_mbytes_per_sec": 0 00:29:07.902 }, 00:29:07.902 "claimed": true, 00:29:07.902 "claim_type": "exclusive_write", 00:29:07.902 "zoned": false, 00:29:07.902 "supported_io_types": { 00:29:07.902 "read": true, 00:29:07.902 "write": true, 00:29:07.902 "unmap": true, 00:29:07.902 "flush": true, 00:29:07.902 "reset": true, 00:29:07.902 "nvme_admin": false, 00:29:07.902 "nvme_io": false, 00:29:07.902 "nvme_io_md": false, 00:29:07.902 "write_zeroes": true, 
00:29:07.902 "zcopy": true, 00:29:07.902 "get_zone_info": false, 00:29:07.902 "zone_management": false, 00:29:07.902 "zone_append": false, 00:29:07.902 "compare": false, 00:29:07.902 "compare_and_write": false, 00:29:07.902 "abort": true, 00:29:07.902 "seek_hole": false, 00:29:07.902 "seek_data": false, 00:29:07.902 "copy": true, 00:29:07.902 "nvme_iov_md": false 00:29:07.902 }, 00:29:07.902 "memory_domains": [ 00:29:07.902 { 00:29:07.902 "dma_device_id": "system", 00:29:07.902 "dma_device_type": 1 00:29:07.902 }, 00:29:07.902 { 00:29:07.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:07.902 "dma_device_type": 2 00:29:07.902 } 00:29:07.902 ], 00:29:07.902 "driver_specific": { 00:29:07.902 "passthru": { 00:29:07.902 "name": "pt1", 00:29:07.902 "base_bdev_name": "malloc1" 00:29:07.902 } 00:29:07.902 } 00:29:07.902 }' 00:29:07.902 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:07.902 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.160 11:40:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.160 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.160 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.418 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.418 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.418 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:08.418 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:29:08.418 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:08.676 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:08.676 "name": "pt2", 00:29:08.676 "aliases": [ 00:29:08.676 "00000000-0000-0000-0000-000000000002" 00:29:08.676 ], 00:29:08.676 "product_name": "passthru", 00:29:08.676 "block_size": 512, 00:29:08.676 "num_blocks": 65536, 00:29:08.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:08.676 "assigned_rate_limits": { 00:29:08.676 "rw_ios_per_sec": 0, 00:29:08.676 "rw_mbytes_per_sec": 0, 00:29:08.676 "r_mbytes_per_sec": 0, 00:29:08.676 "w_mbytes_per_sec": 0 00:29:08.676 }, 00:29:08.676 "claimed": true, 00:29:08.676 "claim_type": "exclusive_write", 00:29:08.676 "zoned": false, 00:29:08.676 "supported_io_types": { 00:29:08.676 "read": true, 00:29:08.676 "write": true, 00:29:08.676 "unmap": true, 00:29:08.676 "flush": true, 00:29:08.676 "reset": true, 00:29:08.676 "nvme_admin": false, 00:29:08.676 "nvme_io": false, 00:29:08.676 "nvme_io_md": false, 00:29:08.676 "write_zeroes": true, 00:29:08.676 "zcopy": true, 00:29:08.676 "get_zone_info": false, 00:29:08.676 "zone_management": false, 00:29:08.676 
"zone_append": false, 00:29:08.676 "compare": false, 00:29:08.676 "compare_and_write": false, 00:29:08.676 "abort": true, 00:29:08.676 "seek_hole": false, 00:29:08.676 "seek_data": false, 00:29:08.676 "copy": true, 00:29:08.676 "nvme_iov_md": false 00:29:08.676 }, 00:29:08.676 "memory_domains": [ 00:29:08.676 { 00:29:08.676 "dma_device_id": "system", 00:29:08.676 "dma_device_type": 1 00:29:08.676 }, 00:29:08.676 { 00:29:08.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:08.676 "dma_device_type": 2 00:29:08.676 } 00:29:08.676 ], 00:29:08.676 "driver_specific": { 00:29:08.676 "passthru": { 00:29:08.676 "name": "pt2", 00:29:08.676 "base_bdev_name": "malloc2" 00:29:08.676 } 00:29:08.676 } 00:29:08.676 }' 00:29:08.676 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:08.677 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:29:08.935 11:40:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:09.214 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:09.214 "name": "pt3", 00:29:09.214 "aliases": [ 00:29:09.214 "00000000-0000-0000-0000-000000000003" 00:29:09.214 ], 00:29:09.214 "product_name": "passthru", 00:29:09.214 "block_size": 512, 00:29:09.214 "num_blocks": 65536, 00:29:09.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:09.214 "assigned_rate_limits": { 00:29:09.214 "rw_ios_per_sec": 0, 00:29:09.214 "rw_mbytes_per_sec": 0, 00:29:09.214 "r_mbytes_per_sec": 0, 00:29:09.214 "w_mbytes_per_sec": 0 00:29:09.214 }, 00:29:09.214 "claimed": true, 00:29:09.214 "claim_type": "exclusive_write", 00:29:09.214 "zoned": false, 00:29:09.214 "supported_io_types": { 00:29:09.214 "read": true, 00:29:09.214 "write": true, 00:29:09.214 "unmap": true, 00:29:09.214 "flush": true, 00:29:09.214 "reset": true, 00:29:09.214 "nvme_admin": false, 00:29:09.214 "nvme_io": false, 00:29:09.214 "nvme_io_md": false, 00:29:09.214 "write_zeroes": true, 00:29:09.214 "zcopy": true, 00:29:09.214 "get_zone_info": false, 00:29:09.214 "zone_management": false, 00:29:09.214 "zone_append": false, 00:29:09.214 "compare": false, 00:29:09.214 "compare_and_write": false, 00:29:09.214 "abort": true, 
00:29:09.214 "seek_hole": false, 00:29:09.214 "seek_data": false, 00:29:09.214 "copy": true, 00:29:09.214 "nvme_iov_md": false 00:29:09.214 }, 00:29:09.214 "memory_domains": [ 00:29:09.214 { 00:29:09.214 "dma_device_id": "system", 00:29:09.214 "dma_device_type": 1 00:29:09.214 }, 00:29:09.214 { 00:29:09.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.214 "dma_device_type": 2 00:29:09.214 } 00:29:09.214 ], 00:29:09.214 "driver_specific": { 00:29:09.214 "passthru": { 00:29:09.214 "name": "pt3", 00:29:09.214 "base_bdev_name": "malloc3" 00:29:09.214 } 00:29:09.214 } 00:29:09.214 }' 00:29:09.214 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:09.214 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:09.472 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:09.730 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:09.730 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:09.730 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:29:09.730 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:29:09.730 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:29:09.989 "name": "pt4", 00:29:09.989 "aliases": [ 00:29:09.989 "00000000-0000-0000-0000-000000000004" 00:29:09.989 ], 00:29:09.989 "product_name": "passthru", 00:29:09.989 "block_size": 512, 00:29:09.989 "num_blocks": 65536, 00:29:09.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:09.989 "assigned_rate_limits": { 00:29:09.989 "rw_ios_per_sec": 0, 00:29:09.989 "rw_mbytes_per_sec": 0, 00:29:09.989 "r_mbytes_per_sec": 0, 00:29:09.989 "w_mbytes_per_sec": 0 00:29:09.989 }, 00:29:09.989 "claimed": true, 00:29:09.989 "claim_type": "exclusive_write", 00:29:09.989 "zoned": false, 00:29:09.989 "supported_io_types": { 00:29:09.989 "read": true, 00:29:09.989 "write": true, 00:29:09.989 "unmap": true, 00:29:09.989 "flush": true, 00:29:09.989 "reset": true, 00:29:09.989 "nvme_admin": false, 00:29:09.989 "nvme_io": false, 00:29:09.989 "nvme_io_md": false, 00:29:09.989 "write_zeroes": true, 00:29:09.989 "zcopy": true, 00:29:09.989 "get_zone_info": false, 00:29:09.989 "zone_management": false, 00:29:09.989 "zone_append": false, 00:29:09.989 "compare": false, 00:29:09.989 "compare_and_write": false, 00:29:09.989 "abort": true, 00:29:09.989 "seek_hole": false, 00:29:09.989 "seek_data": false, 00:29:09.989 "copy": true, 00:29:09.989 "nvme_iov_md": 
false 00:29:09.989 }, 00:29:09.989 "memory_domains": [ 00:29:09.989 { 00:29:09.989 "dma_device_id": "system", 00:29:09.989 "dma_device_type": 1 00:29:09.989 }, 00:29:09.989 { 00:29:09.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.989 "dma_device_type": 2 00:29:09.989 } 00:29:09.989 ], 00:29:09.989 "driver_specific": { 00:29:09.989 "passthru": { 00:29:09.989 "name": "pt4", 00:29:09.989 "base_bdev_name": "malloc4" 00:29:09.989 } 00:29:09.989 } 00:29:09.989 }' 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:09.989 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:29:10.247 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:29:10.247 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:10.247 11:40:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:10.247 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:29:10.814 [2024-07-25 11:40:26.390968] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b '!=' 6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b ']' 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid5f 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:29:10.814 [2024-07-25 11:40:26.654836] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
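At this point pt1 has been deleted out from under the array and the test expects raid_bdev1 to stay online in degraded form with 3 of its 4 base bdevs. A minimal sketch of the kind of state assertion this step performs, assuming the rpc.py path and socket used in this run (the temporary variable and the individual [[ ]] checks are illustrative, not a copy of the test script):

# Fetch raid_bdev1 and assert the expected degraded-online state.
tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.state'         <<< "$tmp") == online ]]
[[ $(jq -r '.raid_level'    <<< "$tmp") == raid5f ]]
[[ $(jq -r '.strip_size_kb' <<< "$tmp") == 64 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$tmp") == 3 ]]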
00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.814 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.381 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.381 "name": "raid_bdev1", 00:29:11.381 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:11.381 "strip_size_kb": 64, 00:29:11.381 "state": "online", 00:29:11.381 "raid_level": "raid5f", 00:29:11.381 "superblock": true, 00:29:11.381 "num_base_bdevs": 4, 00:29:11.381 "num_base_bdevs_discovered": 3, 00:29:11.381 "num_base_bdevs_operational": 3, 00:29:11.381 "base_bdevs_list": [ 00:29:11.381 { 00:29:11.381 "name": null, 00:29:11.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.381 "is_configured": false, 00:29:11.381 "data_offset": 2048, 00:29:11.381 "data_size": 63488 00:29:11.381 }, 00:29:11.381 { 00:29:11.381 "name": "pt2", 00:29:11.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:11.381 "is_configured": true, 00:29:11.381 "data_offset": 2048, 00:29:11.381 "data_size": 63488 00:29:11.381 }, 00:29:11.381 { 00:29:11.381 "name": "pt3", 00:29:11.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:11.381 "is_configured": true, 00:29:11.381 "data_offset": 2048, 00:29:11.381 "data_size": 63488 00:29:11.381 }, 00:29:11.381 { 00:29:11.381 "name": "pt4", 00:29:11.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:11.381 "is_configured": true, 00:29:11.381 "data_offset": 2048, 00:29:11.381 "data_size": 63488 00:29:11.381 } 00:29:11.381 ] 00:29:11.381 }' 00:29:11.381 11:40:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.381 11:40:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.947 11:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:12.205 [2024-07-25 11:40:27.899137] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:12.205 [2024-07-25 11:40:27.899185] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:12.205 [2024-07-25 11:40:27.899376] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:12.205 [2024-07-25 11:40:27.899518] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:12.205 [2024-07-25 11:40:27.899559] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:12.205 11:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.205 11:40:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:29:12.462 11:40:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # raid_bdev= 00:29:12.462 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:29:12.462 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:29:12.462 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:12.462 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:29:12.719 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:12.719 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:12.719 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:29:12.977 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:12.977 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:12.977 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:13.235 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:13.235 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:29:13.235 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:29:13.235 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:13.235 11:40:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:13.492 [2024-07-25 11:40:29.195364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:13.492 [2024-07-25 11:40:29.195482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.492 [2024-07-25 11:40:29.195513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:13.492 [2024-07-25 11:40:29.195532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.492 [2024-07-25 11:40:29.198414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.492 [2024-07-25 11:40:29.198503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:13.492 [2024-07-25 11:40:29.198646] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:13.492 [2024-07-25 11:40:29.198725] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:13.492 pt2 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:13.492 11:40:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.492 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.750 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:13.750 "name": "raid_bdev1", 00:29:13.750 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:13.750 "strip_size_kb": 64, 00:29:13.750 "state": "configuring", 00:29:13.750 "raid_level": "raid5f", 00:29:13.750 "superblock": true, 00:29:13.750 "num_base_bdevs": 4, 00:29:13.750 "num_base_bdevs_discovered": 1, 00:29:13.750 "num_base_bdevs_operational": 3, 00:29:13.750 "base_bdevs_list": [ 00:29:13.750 { 00:29:13.750 "name": null, 00:29:13.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.750 "is_configured": false, 00:29:13.750 "data_offset": 2048, 00:29:13.750 "data_size": 63488 00:29:13.750 }, 00:29:13.750 { 00:29:13.750 "name": "pt2", 00:29:13.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:13.750 "is_configured": true, 00:29:13.750 "data_offset": 2048, 00:29:13.750 "data_size": 63488 00:29:13.750 }, 00:29:13.750 { 00:29:13.750 "name": null, 00:29:13.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:13.750 "is_configured": false, 00:29:13.750 "data_offset": 2048, 00:29:13.750 "data_size": 63488 00:29:13.750 }, 00:29:13.750 { 00:29:13.750 "name": null, 00:29:13.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:13.750 "is_configured": false, 00:29:13.750 "data_offset": 2048, 00:29:13.750 "data_size": 63488 00:29:13.750 } 00:29:13.750 ] 00:29:13.750 }' 00:29:13.750 11:40:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:13.750 11:40:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.314 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:29:14.314 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:14.315 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:14.576 [2024-07-25 11:40:30.443753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:14.576 [2024-07-25 11:40:30.443878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.576 [2024-07-25 11:40:30.443907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:29:14.576 [2024-07-25 11:40:30.443925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.576 [2024-07-25 11:40:30.444576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.576 [2024-07-25 11:40:30.444639] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:14.576 [2024-07-25 11:40:30.444752] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:14.576 [2024-07-25 11:40:30.444795] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:14.576 pt3 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.838 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.097 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.097 "name": "raid_bdev1", 00:29:15.097 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:15.097 "strip_size_kb": 64, 00:29:15.097 "state": "configuring", 00:29:15.097 "raid_level": "raid5f", 00:29:15.097 "superblock": true, 00:29:15.097 "num_base_bdevs": 4, 00:29:15.097 "num_base_bdevs_discovered": 2, 00:29:15.097 "num_base_bdevs_operational": 3, 00:29:15.097 "base_bdevs_list": [ 00:29:15.097 { 00:29:15.097 "name": null, 00:29:15.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.097 "is_configured": false, 00:29:15.097 "data_offset": 2048, 00:29:15.097 "data_size": 63488 00:29:15.097 }, 00:29:15.097 { 00:29:15.097 "name": "pt2", 00:29:15.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:15.097 "is_configured": true, 00:29:15.097 "data_offset": 2048, 00:29:15.097 "data_size": 63488 00:29:15.097 }, 00:29:15.097 { 00:29:15.097 "name": "pt3", 00:29:15.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:15.097 "is_configured": true, 00:29:15.097 "data_offset": 2048, 00:29:15.097 "data_size": 63488 00:29:15.097 }, 00:29:15.097 { 00:29:15.097 "name": null, 00:29:15.097 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:15.097 "is_configured": false, 00:29:15.097 "data_offset": 2048, 00:29:15.097 "data_size": 63488 00:29:15.097 } 00:29:15.097 ] 00:29:15.097 }' 00:29:15.097 11:40:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.097 11:40:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.663 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:29:15.663 11:40:31 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:29:15.663 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:29:15.663 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:15.920 [2024-07-25 11:40:31.588080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:15.920 [2024-07-25 11:40:31.588193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.920 [2024-07-25 11:40:31.588228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:29:15.920 [2024-07-25 11:40:31.588248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.920 [2024-07-25 11:40:31.588841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.920 [2024-07-25 11:40:31.588883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:15.920 [2024-07-25 11:40:31.588987] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:15.920 [2024-07-25 11:40:31.589035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:15.920 [2024-07-25 11:40:31.589237] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:15.920 [2024-07-25 11:40:31.589266] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:15.920 [2024-07-25 11:40:31.589621] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:15.920 [2024-07-25 11:40:31.596570] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:15.920 [2024-07-25 11:40:31.596597] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:15.920 [2024-07-25 11:40:31.596998] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.920 pt4 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.920 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:29:16.178 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.178 "name": "raid_bdev1", 00:29:16.178 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:16.178 "strip_size_kb": 64, 00:29:16.178 "state": "online", 00:29:16.178 "raid_level": "raid5f", 00:29:16.178 "superblock": true, 00:29:16.178 "num_base_bdevs": 4, 00:29:16.178 "num_base_bdevs_discovered": 3, 00:29:16.178 "num_base_bdevs_operational": 3, 00:29:16.178 "base_bdevs_list": [ 00:29:16.178 { 00:29:16.178 "name": null, 00:29:16.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.178 "is_configured": false, 00:29:16.178 "data_offset": 2048, 00:29:16.178 "data_size": 63488 00:29:16.178 }, 00:29:16.178 { 00:29:16.178 "name": "pt2", 00:29:16.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:16.178 "is_configured": true, 00:29:16.178 "data_offset": 2048, 00:29:16.178 "data_size": 63488 00:29:16.178 }, 00:29:16.178 { 00:29:16.178 "name": "pt3", 00:29:16.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:16.179 "is_configured": true, 00:29:16.179 "data_offset": 2048, 00:29:16.179 "data_size": 63488 00:29:16.179 }, 00:29:16.179 { 00:29:16.179 "name": "pt4", 00:29:16.179 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:16.179 "is_configured": true, 00:29:16.179 "data_offset": 2048, 00:29:16.179 "data_size": 63488 00:29:16.179 } 00:29:16.179 ] 00:29:16.179 }' 00:29:16.179 11:40:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.179 11:40:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.744 11:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:17.002 [2024-07-25 11:40:32.828759] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.002 [2024-07-25 11:40:32.828812] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:17.002 [2024-07-25 11:40:32.828919] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:17.002 [2024-07-25 11:40:32.829026] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:17.002 [2024-07-25 11:40:32.829043] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:17.002 11:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.002 11:40:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:29:17.260 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:29:17.260 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:29:17.260 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:29:17.260 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:29:17.260 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:29:17.518 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:29:17.794 [2024-07-25 11:40:33.564892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:17.794 [2024-07-25 11:40:33.564988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.794 [2024-07-25 11:40:33.565024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:29:17.794 [2024-07-25 11:40:33.565041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.794 [2024-07-25 11:40:33.567897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.794 [2024-07-25 11:40:33.567951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:17.794 [2024-07-25 11:40:33.568074] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:17.794 [2024-07-25 11:40:33.568137] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:17.794 [2024-07-25 11:40:33.568334] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:17.794 [2024-07-25 11:40:33.568362] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:17.794 [2024-07-25 11:40:33.568391] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:29:17.794 [2024-07-25 11:40:33.568455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:17.794 [2024-07-25 11:40:33.568653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:17.794 pt1 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.794 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.051 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.051 "name": "raid_bdev1", 00:29:18.051 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:18.051 "strip_size_kb": 64, 00:29:18.051 "state": "configuring", 00:29:18.051 "raid_level": 
"raid5f", 00:29:18.051 "superblock": true, 00:29:18.051 "num_base_bdevs": 4, 00:29:18.051 "num_base_bdevs_discovered": 2, 00:29:18.051 "num_base_bdevs_operational": 3, 00:29:18.051 "base_bdevs_list": [ 00:29:18.051 { 00:29:18.051 "name": null, 00:29:18.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.051 "is_configured": false, 00:29:18.051 "data_offset": 2048, 00:29:18.051 "data_size": 63488 00:29:18.051 }, 00:29:18.051 { 00:29:18.051 "name": "pt2", 00:29:18.051 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:18.051 "is_configured": true, 00:29:18.051 "data_offset": 2048, 00:29:18.051 "data_size": 63488 00:29:18.051 }, 00:29:18.051 { 00:29:18.051 "name": "pt3", 00:29:18.051 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:18.051 "is_configured": true, 00:29:18.051 "data_offset": 2048, 00:29:18.051 "data_size": 63488 00:29:18.051 }, 00:29:18.051 { 00:29:18.051 "name": null, 00:29:18.051 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:18.051 "is_configured": false, 00:29:18.051 "data_offset": 2048, 00:29:18.051 "data_size": 63488 00:29:18.051 } 00:29:18.051 ] 00:29:18.051 }' 00:29:18.051 11:40:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.051 11:40:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.986 11:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:29:18.986 11:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:18.986 11:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:29:18.986 11:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:19.245 [2024-07-25 11:40:34.977307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:19.245 [2024-07-25 11:40:34.977447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:19.245 [2024-07-25 11:40:34.977482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:19.245 [2024-07-25 11:40:34.977501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:19.245 [2024-07-25 11:40:34.978106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:19.245 [2024-07-25 11:40:34.978153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:19.245 [2024-07-25 11:40:34.978258] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:19.245 [2024-07-25 11:40:34.978305] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:19.245 [2024-07-25 11:40:34.978511] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:19.245 [2024-07-25 11:40:34.978534] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:19.245 [2024-07-25 11:40:34.978891] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:19.245 [2024-07-25 11:40:34.985083] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:19.245 [2024-07-25 11:40:34.985107] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:19.245 [2024-07-25 11:40:34.985450] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.245 pt4 00:29:19.245 11:40:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.245 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.502 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:19.502 "name": "raid_bdev1", 00:29:19.502 "uuid": "6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b", 00:29:19.502 "strip_size_kb": 64, 00:29:19.502 "state": "online", 00:29:19.502 "raid_level": "raid5f", 00:29:19.502 "superblock": true, 00:29:19.502 "num_base_bdevs": 4, 00:29:19.502 "num_base_bdevs_discovered": 3, 00:29:19.502 "num_base_bdevs_operational": 3, 00:29:19.502 "base_bdevs_list": [ 00:29:19.502 { 00:29:19.502 "name": null, 00:29:19.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.502 "is_configured": false, 00:29:19.502 "data_offset": 2048, 00:29:19.502 "data_size": 63488 00:29:19.502 }, 00:29:19.502 { 00:29:19.502 "name": "pt2", 00:29:19.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:19.502 "is_configured": true, 00:29:19.502 "data_offset": 2048, 00:29:19.502 "data_size": 63488 00:29:19.502 }, 00:29:19.502 { 00:29:19.502 "name": "pt3", 00:29:19.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:19.502 "is_configured": true, 00:29:19.502 "data_offset": 2048, 00:29:19.502 "data_size": 63488 00:29:19.502 }, 00:29:19.502 { 00:29:19.502 "name": "pt4", 00:29:19.502 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:19.502 "is_configured": true, 00:29:19.502 "data_offset": 2048, 00:29:19.502 "data_size": 63488 00:29:19.502 } 00:29:19.502 ] 00:29:19.502 }' 00:29:19.502 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:19.502 11:40:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.068 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:29:20.068 11:40:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 
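The preceding entries show the array being reassembled purely from on-disk superblocks: re-registering each passthru bdev triggers examine, the stale pt1 superblock (older seq number than pt2's) is discarded, and raid_bdev1 moves from configuring to online once pt4 is claimed. A compressed sketch of that reassembly, assuming the same rpc.py socket and that malloc2..malloc4 still carry valid raid5f superblocks (the loop is illustrative; the log above issues the calls one by one):

# Re-register the members; raid examine re-claims each one from its superblock.
for i in 2 3 4; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# With 3 of 4 members present the array is online (degraded); slot 0 stays unconfigured.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs online | jq -r '.[].base_bdevs_list[0].is_configured'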
00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:29:20.635 [2024-07-25 11:40:36.433495] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b '!=' 6c50cd08-45a9-4d9a-b2c9-3e51f32e6c9b ']' 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 97585 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 97585 ']' 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 97585 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97585 00:29:20.635 killing process with pid 97585 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97585' 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 97585 00:29:20.635 11:40:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 97585 00:29:20.635 [2024-07-25 11:40:36.481834] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:20.635 [2024-07-25 11:40:36.481963] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:20.635 [2024-07-25 11:40:36.482075] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:20.635 [2024-07-25 11:40:36.482093] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:21.201 [2024-07-25 11:40:36.839451] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:22.135 ************************************ 00:29:22.135 END TEST raid5f_superblock_test 00:29:22.135 ************************************ 00:29:22.135 11:40:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:29:22.135 00:29:22.135 real 0m29.523s 00:29:22.135 user 0m54.035s 00:29:22.135 sys 0m3.823s 00:29:22.135 11:40:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:22.135 11:40:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.393 11:40:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # '[' true = true ']' 00:29:22.393 11:40:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:29:22.393 11:40:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:22.393 11:40:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:22.393 11:40:38 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:22.393 ************************************ 00:29:22.393 START TEST raid5f_rebuild_test 00:29:22.393 ************************************ 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:22.393 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:22.394 11:40:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=98424 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 98424 /var/tmp/spdk-raid.sock 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 98424 ']' 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:22.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.394 11:40:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.394 [2024-07-25 11:40:38.183641] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:22.394 [2024-07-25 11:40:38.184120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98424 ] 00:29:22.394 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:22.394 Zero copy mechanism will not be used. 
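raid5f_rebuild_test drives the array through bdevperf rather than nbd alone: the target is started with the options shown above and then configured entirely over the RPC socket. A minimal sketch of the per-member setup that the following entries carry out, assuming the repo layout and socket from this run (the $rpc helper variable and the loop are illustrative; the RPC calls themselves match the ones logged below):

# Each RAID member is a passthru bdev on top of a 32 MiB malloc bdev (512 B blocks).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for n in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b BaseBdev${n}_malloc
    $rpc bdev_passthru_create -b BaseBdev${n}_malloc -p BaseBdev${n}
done
# The spare is stacked on a delay bdev so its I/O can be artificially slowed during rebuild.
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
# Assemble the raid5f array (64 KiB strip, no superblock in this test variant).
$rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1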
00:29:22.652 [2024-07-25 11:40:38.359500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.911 [2024-07-25 11:40:38.590097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.911 [2024-07-25 11:40:38.781680] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.911 [2024-07-25 11:40:38.781721] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:23.476 11:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:23.477 11:40:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:23.477 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:23.477 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:23.734 BaseBdev1_malloc 00:29:23.735 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:23.993 [2024-07-25 11:40:39.633445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:23.993 [2024-07-25 11:40:39.633576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.993 [2024-07-25 11:40:39.633621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:23.993 [2024-07-25 11:40:39.633638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.993 [2024-07-25 11:40:39.636492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.993 [2024-07-25 11:40:39.636566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:23.993 BaseBdev1 00:29:23.993 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:23.993 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:24.251 BaseBdev2_malloc 00:29:24.251 11:40:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:24.510 [2024-07-25 11:40:40.144004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:24.510 [2024-07-25 11:40:40.144111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.510 [2024-07-25 11:40:40.144152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:24.510 [2024-07-25 11:40:40.144168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.511 [2024-07-25 11:40:40.146841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.511 [2024-07-25 11:40:40.146885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:24.511 BaseBdev2 00:29:24.511 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:24.511 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev3_malloc 00:29:24.769 BaseBdev3_malloc 00:29:24.769 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:24.769 [2024-07-25 11:40:40.626944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:24.769 [2024-07-25 11:40:40.627044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.769 [2024-07-25 11:40:40.627086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:24.769 [2024-07-25 11:40:40.627102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.769 [2024-07-25 11:40:40.629805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.769 [2024-07-25 11:40:40.629849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:24.769 BaseBdev3 00:29:24.769 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:24.769 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:25.028 BaseBdev4_malloc 00:29:25.028 11:40:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:25.286 [2024-07-25 11:40:41.117269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:25.286 [2024-07-25 11:40:41.117378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:25.286 [2024-07-25 11:40:41.117419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:25.286 [2024-07-25 11:40:41.117436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:25.286 [2024-07-25 11:40:41.120894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:25.286 [2024-07-25 11:40:41.120958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:25.286 BaseBdev4 00:29:25.286 11:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:25.544 spare_malloc 00:29:25.544 11:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:25.802 spare_delay 00:29:25.802 11:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:26.060 [2024-07-25 11:40:41.847403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:26.060 [2024-07-25 11:40:41.847485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:26.060 [2024-07-25 11:40:41.847527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:26.060 [2024-07-25 11:40:41.847544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:26.060 [2024-07-25 11:40:41.850533] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:26.060 [2024-07-25 11:40:41.850581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:26.060 spare 00:29:26.060 11:40:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:26.318 [2024-07-25 11:40:42.079601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:26.318 [2024-07-25 11:40:42.082932] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:26.318 [2024-07-25 11:40:42.083092] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:26.318 [2024-07-25 11:40:42.083173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:26.318 [2024-07-25 11:40:42.083357] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:26.318 [2024-07-25 11:40:42.083378] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:29:26.318 [2024-07-25 11:40:42.083812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:26.318 [2024-07-25 11:40:42.091035] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:26.318 [2024-07-25 11:40:42.091068] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:26.318 [2024-07-25 11:40:42.091411] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.318 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.577 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:26.577 "name": "raid_bdev1", 00:29:26.577 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:26.577 "strip_size_kb": 64, 00:29:26.577 "state": "online", 00:29:26.577 "raid_level": "raid5f", 00:29:26.577 "superblock": false, 00:29:26.577 "num_base_bdevs": 4, 00:29:26.577 "num_base_bdevs_discovered": 4, 
00:29:26.577 "num_base_bdevs_operational": 4, 00:29:26.577 "base_bdevs_list": [ 00:29:26.577 { 00:29:26.577 "name": "BaseBdev1", 00:29:26.577 "uuid": "f359fa55-7974-5409-a751-07f3c7531e34", 00:29:26.577 "is_configured": true, 00:29:26.577 "data_offset": 0, 00:29:26.577 "data_size": 65536 00:29:26.577 }, 00:29:26.577 { 00:29:26.577 "name": "BaseBdev2", 00:29:26.577 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:26.577 "is_configured": true, 00:29:26.577 "data_offset": 0, 00:29:26.577 "data_size": 65536 00:29:26.577 }, 00:29:26.577 { 00:29:26.577 "name": "BaseBdev3", 00:29:26.577 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:26.577 "is_configured": true, 00:29:26.577 "data_offset": 0, 00:29:26.577 "data_size": 65536 00:29:26.577 }, 00:29:26.577 { 00:29:26.577 "name": "BaseBdev4", 00:29:26.577 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:26.577 "is_configured": true, 00:29:26.577 "data_offset": 0, 00:29:26.577 "data_size": 65536 00:29:26.577 } 00:29:26.577 ] 00:29:26.577 }' 00:29:26.577 11:40:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:26.577 11:40:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.515 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:27.515 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:27.515 [2024-07-25 11:40:43.291716] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:27.515 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=196608 00:29:27.515 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.515 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:27.773 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 
/dev/nbd0 00:29:28.031 [2024-07-25 11:40:43.827712] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:28.031 /dev/nbd0 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.031 1+0 records in 00:29:28.031 1+0 records out 00:29:28.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253064 s, 16.2 MB/s 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # echo 192 00:29:28.031 11:40:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:29:28.965 512+0 records in 00:29:28.965 512+0 records out 00:29:28.965 100663296 bytes (101 MB, 96 MiB) copied, 0.647076 s, 156 MB/s 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:28.965 
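A minimal sketch of the arithmetic behind the dd parameters above, assuming only the values visible in the trace (blocklen 512, strip_size_kb 64, 4 base bdevs, of which 3 carry data per raid5f stripe):

  strip_blocks=$(( 64 * 1024 / 512 ))            # 128 blocks per chunk
  write_unit_size=$(( strip_blocks * (4 - 1) ))  # 384 blocks = one full data stripe (parity excluded)
  full_stripe_bytes=$(( write_unit_size * 512 )) # 196608 bytes = 192 KiB, hence dd bs=196608
  echo "512 full stripes = $(( 512 * full_stripe_bytes )) bytes"   # 100663296, matching the dd output above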
11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:28.965 [2024-07-25 11:40:44.794952] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:28.965 11:40:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:29.223 [2024-07-25 11:40:45.098556] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.482 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.741 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:29.741 "name": "raid_bdev1", 00:29:29.741 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:29.741 "strip_size_kb": 64, 00:29:29.741 "state": "online", 00:29:29.741 "raid_level": "raid5f", 00:29:29.741 "superblock": false, 00:29:29.741 "num_base_bdevs": 4, 00:29:29.741 "num_base_bdevs_discovered": 3, 00:29:29.741 "num_base_bdevs_operational": 3, 00:29:29.741 "base_bdevs_list": [ 00:29:29.741 { 00:29:29.741 "name": null, 00:29:29.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.741 "is_configured": false, 00:29:29.741 "data_offset": 0, 00:29:29.741 "data_size": 65536 00:29:29.741 }, 00:29:29.741 { 00:29:29.741 "name": "BaseBdev2", 00:29:29.741 "uuid": 
"eceefb1b-d535-5832-b963-7085381d3c03", 00:29:29.741 "is_configured": true, 00:29:29.741 "data_offset": 0, 00:29:29.741 "data_size": 65536 00:29:29.741 }, 00:29:29.741 { 00:29:29.741 "name": "BaseBdev3", 00:29:29.741 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:29.741 "is_configured": true, 00:29:29.741 "data_offset": 0, 00:29:29.741 "data_size": 65536 00:29:29.741 }, 00:29:29.741 { 00:29:29.741 "name": "BaseBdev4", 00:29:29.741 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:29.741 "is_configured": true, 00:29:29.741 "data_offset": 0, 00:29:29.741 "data_size": 65536 00:29:29.741 } 00:29:29.741 ] 00:29:29.741 }' 00:29:29.741 11:40:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:29.741 11:40:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.306 11:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:30.564 [2024-07-25 11:40:46.238863] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:30.564 [2024-07-25 11:40:46.251583] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:29:30.564 [2024-07-25 11:40:46.260267] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:30.564 11:40:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.498 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.756 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.756 "name": "raid_bdev1", 00:29:31.756 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:31.756 "strip_size_kb": 64, 00:29:31.756 "state": "online", 00:29:31.756 "raid_level": "raid5f", 00:29:31.756 "superblock": false, 00:29:31.756 "num_base_bdevs": 4, 00:29:31.756 "num_base_bdevs_discovered": 4, 00:29:31.756 "num_base_bdevs_operational": 4, 00:29:31.756 "process": { 00:29:31.756 "type": "rebuild", 00:29:31.756 "target": "spare", 00:29:31.756 "progress": { 00:29:31.756 "blocks": 23040, 00:29:31.756 "percent": 11 00:29:31.756 } 00:29:31.756 }, 00:29:31.756 "base_bdevs_list": [ 00:29:31.756 { 00:29:31.756 "name": "spare", 00:29:31.756 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:31.756 "is_configured": true, 00:29:31.756 "data_offset": 0, 00:29:31.756 "data_size": 65536 00:29:31.756 }, 00:29:31.756 { 00:29:31.756 "name": "BaseBdev2", 00:29:31.756 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:31.756 "is_configured": true, 00:29:31.756 "data_offset": 0, 00:29:31.756 "data_size": 65536 00:29:31.756 }, 00:29:31.756 { 00:29:31.756 "name": "BaseBdev3", 
00:29:31.756 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:31.756 "is_configured": true, 00:29:31.756 "data_offset": 0, 00:29:31.756 "data_size": 65536 00:29:31.756 }, 00:29:31.756 { 00:29:31.756 "name": "BaseBdev4", 00:29:31.756 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:31.756 "is_configured": true, 00:29:31.756 "data_offset": 0, 00:29:31.756 "data_size": 65536 00:29:31.756 } 00:29:31.756 ] 00:29:31.756 }' 00:29:31.756 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:31.756 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.756 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.014 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.014 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:32.014 [2024-07-25 11:40:47.874367] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:32.014 [2024-07-25 11:40:47.877247] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:32.014 [2024-07-25 11:40:47.877333] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.014 [2024-07-25 11:40:47.877366] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:32.014 [2024-07-25 11:40:47.877379] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.272 11:40:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.272 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:32.272 "name": "raid_bdev1", 00:29:32.272 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:32.272 "strip_size_kb": 64, 00:29:32.272 "state": "online", 00:29:32.272 "raid_level": "raid5f", 00:29:32.272 "superblock": false, 00:29:32.272 "num_base_bdevs": 4, 00:29:32.272 "num_base_bdevs_discovered": 3, 00:29:32.272 "num_base_bdevs_operational": 3, 
00:29:32.272 "base_bdevs_list": [ 00:29:32.272 { 00:29:32.272 "name": null, 00:29:32.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.272 "is_configured": false, 00:29:32.272 "data_offset": 0, 00:29:32.272 "data_size": 65536 00:29:32.272 }, 00:29:32.272 { 00:29:32.272 "name": "BaseBdev2", 00:29:32.272 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:32.272 "is_configured": true, 00:29:32.272 "data_offset": 0, 00:29:32.272 "data_size": 65536 00:29:32.272 }, 00:29:32.272 { 00:29:32.272 "name": "BaseBdev3", 00:29:32.272 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:32.272 "is_configured": true, 00:29:32.272 "data_offset": 0, 00:29:32.272 "data_size": 65536 00:29:32.272 }, 00:29:32.272 { 00:29:32.272 "name": "BaseBdev4", 00:29:32.272 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:32.272 "is_configured": true, 00:29:32.272 "data_offset": 0, 00:29:32.272 "data_size": 65536 00:29:32.272 } 00:29:32.272 ] 00:29:32.272 }' 00:29:32.272 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:32.272 11:40:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.205 11:40:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.205 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.205 "name": "raid_bdev1", 00:29:33.205 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:33.205 "strip_size_kb": 64, 00:29:33.205 "state": "online", 00:29:33.205 "raid_level": "raid5f", 00:29:33.205 "superblock": false, 00:29:33.205 "num_base_bdevs": 4, 00:29:33.205 "num_base_bdevs_discovered": 3, 00:29:33.205 "num_base_bdevs_operational": 3, 00:29:33.205 "base_bdevs_list": [ 00:29:33.205 { 00:29:33.205 "name": null, 00:29:33.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.205 "is_configured": false, 00:29:33.205 "data_offset": 0, 00:29:33.205 "data_size": 65536 00:29:33.205 }, 00:29:33.205 { 00:29:33.205 "name": "BaseBdev2", 00:29:33.205 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:33.205 "is_configured": true, 00:29:33.205 "data_offset": 0, 00:29:33.205 "data_size": 65536 00:29:33.205 }, 00:29:33.205 { 00:29:33.205 "name": "BaseBdev3", 00:29:33.205 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:33.205 "is_configured": true, 00:29:33.205 "data_offset": 0, 00:29:33.205 "data_size": 65536 00:29:33.205 }, 00:29:33.205 { 00:29:33.205 "name": "BaseBdev4", 00:29:33.205 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:33.205 "is_configured": true, 00:29:33.205 "data_offset": 0, 00:29:33.205 "data_size": 65536 00:29:33.205 } 00:29:33.205 ] 00:29:33.205 }' 00:29:33.205 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:33.463 
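A minimal sketch of the state query the harness keeps repeating above, assuming the same RPC script, socket and raid bdev name shown in the trace; it pulls one bdev out of bdev_raid_get_bdevs and reads the rebuild process fields with the same jq filters:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  echo "$info" | jq -r '.process.type // "none"'    # "rebuild" while a rebuild is running, "none" otherwise
  echo "$info" | jq -r '.process.target // "none"'  # bdev being rebuilt onto, e.g. "spare"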
11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:33.463 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:33.463 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:33.463 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:33.721 [2024-07-25 11:40:49.415498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:33.721 [2024-07-25 11:40:49.427871] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:29:33.721 [2024-07-25 11:40:49.436465] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:33.721 11:40:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.664 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.920 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.920 "name": "raid_bdev1", 00:29:34.920 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:34.920 "strip_size_kb": 64, 00:29:34.920 "state": "online", 00:29:34.920 "raid_level": "raid5f", 00:29:34.920 "superblock": false, 00:29:34.920 "num_base_bdevs": 4, 00:29:34.920 "num_base_bdevs_discovered": 4, 00:29:34.920 "num_base_bdevs_operational": 4, 00:29:34.920 "process": { 00:29:34.920 "type": "rebuild", 00:29:34.920 "target": "spare", 00:29:34.921 "progress": { 00:29:34.921 "blocks": 23040, 00:29:34.921 "percent": 11 00:29:34.921 } 00:29:34.921 }, 00:29:34.921 "base_bdevs_list": [ 00:29:34.921 { 00:29:34.921 "name": "spare", 00:29:34.921 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:34.921 "is_configured": true, 00:29:34.921 "data_offset": 0, 00:29:34.921 "data_size": 65536 00:29:34.921 }, 00:29:34.921 { 00:29:34.921 "name": "BaseBdev2", 00:29:34.921 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:34.921 "is_configured": true, 00:29:34.921 "data_offset": 0, 00:29:34.921 "data_size": 65536 00:29:34.921 }, 00:29:34.921 { 00:29:34.921 "name": "BaseBdev3", 00:29:34.921 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:34.921 "is_configured": true, 00:29:34.921 "data_offset": 0, 00:29:34.921 "data_size": 65536 00:29:34.921 }, 00:29:34.921 { 00:29:34.921 "name": "BaseBdev4", 00:29:34.921 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:34.921 "is_configured": true, 00:29:34.921 "data_offset": 0, 00:29:34.921 "data_size": 65536 00:29:34.921 } 00:29:34.921 ] 00:29:34.921 }' 00:29:34.921 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq 
-r '.process.type // "none"' 00:29:34.921 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.921 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=1414 00:29:35.248 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.249 11:40:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.249 11:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:35.249 "name": "raid_bdev1", 00:29:35.249 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:35.249 "strip_size_kb": 64, 00:29:35.249 "state": "online", 00:29:35.249 "raid_level": "raid5f", 00:29:35.249 "superblock": false, 00:29:35.249 "num_base_bdevs": 4, 00:29:35.249 "num_base_bdevs_discovered": 4, 00:29:35.249 "num_base_bdevs_operational": 4, 00:29:35.249 "process": { 00:29:35.249 "type": "rebuild", 00:29:35.249 "target": "spare", 00:29:35.249 "progress": { 00:29:35.249 "blocks": 28800, 00:29:35.249 "percent": 14 00:29:35.249 } 00:29:35.249 }, 00:29:35.249 "base_bdevs_list": [ 00:29:35.249 { 00:29:35.249 "name": "spare", 00:29:35.249 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:35.249 "is_configured": true, 00:29:35.249 "data_offset": 0, 00:29:35.249 "data_size": 65536 00:29:35.249 }, 00:29:35.249 { 00:29:35.249 "name": "BaseBdev2", 00:29:35.249 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:35.249 "is_configured": true, 00:29:35.249 "data_offset": 0, 00:29:35.249 "data_size": 65536 00:29:35.249 }, 00:29:35.249 { 00:29:35.249 "name": "BaseBdev3", 00:29:35.249 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:35.249 "is_configured": true, 00:29:35.249 "data_offset": 0, 00:29:35.249 "data_size": 65536 00:29:35.249 }, 00:29:35.249 { 00:29:35.249 "name": "BaseBdev4", 00:29:35.249 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:35.249 "is_configured": true, 00:29:35.249 "data_offset": 0, 00:29:35.249 "data_size": 65536 00:29:35.249 } 00:29:35.249 ] 00:29:35.249 }' 00:29:35.249 11:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:35.507 11:40:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:35.507 11:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:35.507 11:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:35.507 11:40:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.440 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.696 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.696 "name": "raid_bdev1", 00:29:36.696 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:36.696 "strip_size_kb": 64, 00:29:36.696 "state": "online", 00:29:36.696 "raid_level": "raid5f", 00:29:36.696 "superblock": false, 00:29:36.696 "num_base_bdevs": 4, 00:29:36.696 "num_base_bdevs_discovered": 4, 00:29:36.696 "num_base_bdevs_operational": 4, 00:29:36.696 "process": { 00:29:36.696 "type": "rebuild", 00:29:36.696 "target": "spare", 00:29:36.696 "progress": { 00:29:36.696 "blocks": 55680, 00:29:36.696 "percent": 28 00:29:36.696 } 00:29:36.696 }, 00:29:36.696 "base_bdevs_list": [ 00:29:36.696 { 00:29:36.696 "name": "spare", 00:29:36.696 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:36.696 "is_configured": true, 00:29:36.696 "data_offset": 0, 00:29:36.696 "data_size": 65536 00:29:36.696 }, 00:29:36.696 { 00:29:36.696 "name": "BaseBdev2", 00:29:36.696 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:36.696 "is_configured": true, 00:29:36.697 "data_offset": 0, 00:29:36.697 "data_size": 65536 00:29:36.697 }, 00:29:36.697 { 00:29:36.697 "name": "BaseBdev3", 00:29:36.697 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:36.697 "is_configured": true, 00:29:36.697 "data_offset": 0, 00:29:36.697 "data_size": 65536 00:29:36.697 }, 00:29:36.697 { 00:29:36.697 "name": "BaseBdev4", 00:29:36.697 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:36.697 "is_configured": true, 00:29:36.697 "data_offset": 0, 00:29:36.697 "data_size": 65536 00:29:36.697 } 00:29:36.697 ] 00:29:36.697 }' 00:29:36.697 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.697 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:36.697 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.697 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:36.697 11:40:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:38.068 11:40:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.068 "name": "raid_bdev1", 00:29:38.068 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:38.068 "strip_size_kb": 64, 00:29:38.068 "state": "online", 00:29:38.068 "raid_level": "raid5f", 00:29:38.068 "superblock": false, 00:29:38.068 "num_base_bdevs": 4, 00:29:38.068 "num_base_bdevs_discovered": 4, 00:29:38.068 "num_base_bdevs_operational": 4, 00:29:38.068 "process": { 00:29:38.068 "type": "rebuild", 00:29:38.068 "target": "spare", 00:29:38.068 "progress": { 00:29:38.068 "blocks": 80640, 00:29:38.068 "percent": 41 00:29:38.068 } 00:29:38.068 }, 00:29:38.068 "base_bdevs_list": [ 00:29:38.068 { 00:29:38.068 "name": "spare", 00:29:38.068 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:38.068 "is_configured": true, 00:29:38.068 "data_offset": 0, 00:29:38.068 "data_size": 65536 00:29:38.068 }, 00:29:38.068 { 00:29:38.068 "name": "BaseBdev2", 00:29:38.068 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:38.068 "is_configured": true, 00:29:38.068 "data_offset": 0, 00:29:38.068 "data_size": 65536 00:29:38.068 }, 00:29:38.068 { 00:29:38.068 "name": "BaseBdev3", 00:29:38.068 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:38.068 "is_configured": true, 00:29:38.068 "data_offset": 0, 00:29:38.068 "data_size": 65536 00:29:38.068 }, 00:29:38.068 { 00:29:38.068 "name": "BaseBdev4", 00:29:38.068 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:38.068 "is_configured": true, 00:29:38.068 "data_offset": 0, 00:29:38.068 "data_size": 65536 00:29:38.068 } 00:29:38.068 ] 00:29:38.068 }' 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:38.068 11:40:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.441 11:40:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.441 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.441 "name": "raid_bdev1", 00:29:39.441 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:39.441 "strip_size_kb": 64, 00:29:39.441 "state": "online", 00:29:39.441 "raid_level": "raid5f", 00:29:39.441 "superblock": false, 00:29:39.441 "num_base_bdevs": 4, 00:29:39.441 "num_base_bdevs_discovered": 4, 00:29:39.441 "num_base_bdevs_operational": 4, 00:29:39.441 "process": { 00:29:39.441 "type": "rebuild", 00:29:39.441 "target": "spare", 00:29:39.441 "progress": { 00:29:39.441 "blocks": 107520, 00:29:39.441 "percent": 54 00:29:39.441 } 00:29:39.441 }, 00:29:39.441 "base_bdevs_list": [ 00:29:39.441 { 00:29:39.441 "name": "spare", 00:29:39.441 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:39.441 "is_configured": true, 00:29:39.441 "data_offset": 0, 00:29:39.441 "data_size": 65536 00:29:39.441 }, 00:29:39.441 { 00:29:39.441 "name": "BaseBdev2", 00:29:39.441 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:39.441 "is_configured": true, 00:29:39.441 "data_offset": 0, 00:29:39.441 "data_size": 65536 00:29:39.441 }, 00:29:39.441 { 00:29:39.441 "name": "BaseBdev3", 00:29:39.441 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:39.441 "is_configured": true, 00:29:39.441 "data_offset": 0, 00:29:39.441 "data_size": 65536 00:29:39.442 }, 00:29:39.442 { 00:29:39.442 "name": "BaseBdev4", 00:29:39.442 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:39.442 "is_configured": true, 00:29:39.442 "data_offset": 0, 00:29:39.442 "data_size": 65536 00:29:39.442 } 00:29:39.442 ] 00:29:39.442 }' 00:29:39.442 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:39.442 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.442 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:39.442 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.442 11:40:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.376 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.376 
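A minimal sketch, assuming the same socket and bdev name as above, of the polling pattern traced around bdev_raid.sh@722-@726: re-query the raid bdev once a second until the rebuild process disappears or a timeout expires.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=$(( SECONDS + 300 ))   # illustrative bound; the harness derives its own timeout value
  while (( SECONDS < timeout )); do
    ptype=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [[ $ptype == none ]] && break   # the process object is removed once the rebuild has finished
    sleep 1
  done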
11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.657 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.657 "name": "raid_bdev1", 00:29:40.657 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:40.657 "strip_size_kb": 64, 00:29:40.657 "state": "online", 00:29:40.657 "raid_level": "raid5f", 00:29:40.657 "superblock": false, 00:29:40.657 "num_base_bdevs": 4, 00:29:40.657 "num_base_bdevs_discovered": 4, 00:29:40.657 "num_base_bdevs_operational": 4, 00:29:40.657 "process": { 00:29:40.657 "type": "rebuild", 00:29:40.657 "target": "spare", 00:29:40.657 "progress": { 00:29:40.657 "blocks": 132480, 00:29:40.657 "percent": 67 00:29:40.657 } 00:29:40.657 }, 00:29:40.657 "base_bdevs_list": [ 00:29:40.658 { 00:29:40.658 "name": "spare", 00:29:40.658 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 0, 00:29:40.658 "data_size": 65536 00:29:40.658 }, 00:29:40.658 { 00:29:40.658 "name": "BaseBdev2", 00:29:40.658 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 0, 00:29:40.658 "data_size": 65536 00:29:40.658 }, 00:29:40.658 { 00:29:40.658 "name": "BaseBdev3", 00:29:40.658 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 0, 00:29:40.658 "data_size": 65536 00:29:40.658 }, 00:29:40.658 { 00:29:40.658 "name": "BaseBdev4", 00:29:40.658 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 0, 00:29:40.658 "data_size": 65536 00:29:40.658 } 00:29:40.658 ] 00:29:40.658 }' 00:29:40.658 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.658 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.658 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.916 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.916 11:40:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.850 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.107 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.107 "name": "raid_bdev1", 00:29:42.107 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:42.107 "strip_size_kb": 64, 00:29:42.107 "state": "online", 00:29:42.107 "raid_level": "raid5f", 
00:29:42.107 "superblock": false, 00:29:42.107 "num_base_bdevs": 4, 00:29:42.107 "num_base_bdevs_discovered": 4, 00:29:42.107 "num_base_bdevs_operational": 4, 00:29:42.107 "process": { 00:29:42.107 "type": "rebuild", 00:29:42.107 "target": "spare", 00:29:42.107 "progress": { 00:29:42.107 "blocks": 159360, 00:29:42.107 "percent": 81 00:29:42.107 } 00:29:42.107 }, 00:29:42.107 "base_bdevs_list": [ 00:29:42.107 { 00:29:42.107 "name": "spare", 00:29:42.107 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:42.107 "is_configured": true, 00:29:42.107 "data_offset": 0, 00:29:42.107 "data_size": 65536 00:29:42.107 }, 00:29:42.107 { 00:29:42.107 "name": "BaseBdev2", 00:29:42.107 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:42.107 "is_configured": true, 00:29:42.107 "data_offset": 0, 00:29:42.107 "data_size": 65536 00:29:42.107 }, 00:29:42.107 { 00:29:42.107 "name": "BaseBdev3", 00:29:42.107 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:42.107 "is_configured": true, 00:29:42.107 "data_offset": 0, 00:29:42.107 "data_size": 65536 00:29:42.107 }, 00:29:42.107 { 00:29:42.107 "name": "BaseBdev4", 00:29:42.107 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:42.107 "is_configured": true, 00:29:42.107 "data_offset": 0, 00:29:42.107 "data_size": 65536 00:29:42.107 } 00:29:42.107 ] 00:29:42.107 }' 00:29:42.107 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.107 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:42.107 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:42.108 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.108 11:40:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.537 11:40:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.537 "name": "raid_bdev1", 00:29:43.537 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:43.537 "strip_size_kb": 64, 00:29:43.537 "state": "online", 00:29:43.537 "raid_level": "raid5f", 00:29:43.537 "superblock": false, 00:29:43.537 "num_base_bdevs": 4, 00:29:43.537 "num_base_bdevs_discovered": 4, 00:29:43.537 "num_base_bdevs_operational": 4, 00:29:43.537 "process": { 00:29:43.537 "type": "rebuild", 00:29:43.537 "target": "spare", 00:29:43.537 "progress": { 00:29:43.537 "blocks": 184320, 00:29:43.537 "percent": 93 00:29:43.537 } 00:29:43.537 }, 00:29:43.537 "base_bdevs_list": [ 00:29:43.537 { 
00:29:43.537 "name": "spare", 00:29:43.537 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:43.537 "is_configured": true, 00:29:43.537 "data_offset": 0, 00:29:43.537 "data_size": 65536 00:29:43.537 }, 00:29:43.537 { 00:29:43.537 "name": "BaseBdev2", 00:29:43.537 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:43.537 "is_configured": true, 00:29:43.537 "data_offset": 0, 00:29:43.537 "data_size": 65536 00:29:43.537 }, 00:29:43.537 { 00:29:43.537 "name": "BaseBdev3", 00:29:43.537 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:43.537 "is_configured": true, 00:29:43.537 "data_offset": 0, 00:29:43.537 "data_size": 65536 00:29:43.537 }, 00:29:43.537 { 00:29:43.537 "name": "BaseBdev4", 00:29:43.537 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:43.537 "is_configured": true, 00:29:43.537 "data_offset": 0, 00:29:43.537 "data_size": 65536 00:29:43.537 } 00:29:43.537 ] 00:29:43.537 }' 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:43.537 11:40:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:44.103 [2024-07-25 11:40:59.842666] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:44.103 [2024-07-25 11:40:59.842787] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:44.103 [2024-07-25 11:40:59.842846] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.669 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.926 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.926 "name": "raid_bdev1", 00:29:44.926 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:44.926 "strip_size_kb": 64, 00:29:44.926 "state": "online", 00:29:44.926 "raid_level": "raid5f", 00:29:44.926 "superblock": false, 00:29:44.926 "num_base_bdevs": 4, 00:29:44.926 "num_base_bdevs_discovered": 4, 00:29:44.926 "num_base_bdevs_operational": 4, 00:29:44.926 "base_bdevs_list": [ 00:29:44.927 { 00:29:44.927 "name": "spare", 00:29:44.927 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:44.927 "is_configured": true, 00:29:44.927 "data_offset": 0, 00:29:44.927 "data_size": 65536 00:29:44.927 }, 00:29:44.927 { 00:29:44.927 "name": 
"BaseBdev2", 00:29:44.927 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:44.927 "is_configured": true, 00:29:44.927 "data_offset": 0, 00:29:44.927 "data_size": 65536 00:29:44.927 }, 00:29:44.927 { 00:29:44.927 "name": "BaseBdev3", 00:29:44.927 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:44.927 "is_configured": true, 00:29:44.927 "data_offset": 0, 00:29:44.927 "data_size": 65536 00:29:44.927 }, 00:29:44.927 { 00:29:44.927 "name": "BaseBdev4", 00:29:44.927 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:44.927 "is_configured": true, 00:29:44.927 "data_offset": 0, 00:29:44.927 "data_size": 65536 00:29:44.927 } 00:29:44.927 ] 00:29:44.927 }' 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.927 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.184 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.184 "name": "raid_bdev1", 00:29:45.184 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:45.184 "strip_size_kb": 64, 00:29:45.184 "state": "online", 00:29:45.184 "raid_level": "raid5f", 00:29:45.184 "superblock": false, 00:29:45.184 "num_base_bdevs": 4, 00:29:45.184 "num_base_bdevs_discovered": 4, 00:29:45.184 "num_base_bdevs_operational": 4, 00:29:45.184 "base_bdevs_list": [ 00:29:45.184 { 00:29:45.184 "name": "spare", 00:29:45.184 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:45.184 "is_configured": true, 00:29:45.184 "data_offset": 0, 00:29:45.184 "data_size": 65536 00:29:45.184 }, 00:29:45.184 { 00:29:45.184 "name": "BaseBdev2", 00:29:45.184 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:45.184 "is_configured": true, 00:29:45.184 "data_offset": 0, 00:29:45.184 "data_size": 65536 00:29:45.184 }, 00:29:45.184 { 00:29:45.184 "name": "BaseBdev3", 00:29:45.184 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:45.184 "is_configured": true, 00:29:45.184 "data_offset": 0, 00:29:45.184 "data_size": 65536 00:29:45.184 }, 00:29:45.184 { 00:29:45.184 "name": "BaseBdev4", 00:29:45.185 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:45.185 "is_configured": true, 00:29:45.185 "data_offset": 0, 00:29:45.185 "data_size": 65536 00:29:45.185 } 00:29:45.185 ] 00:29:45.185 }' 00:29:45.185 11:41:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.185 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.749 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.749 "name": "raid_bdev1", 00:29:45.749 "uuid": "a767f073-ee20-4ef8-a609-64e00134ed1b", 00:29:45.749 "strip_size_kb": 64, 00:29:45.749 "state": "online", 00:29:45.749 "raid_level": "raid5f", 00:29:45.749 "superblock": false, 00:29:45.749 "num_base_bdevs": 4, 00:29:45.749 "num_base_bdevs_discovered": 4, 00:29:45.749 "num_base_bdevs_operational": 4, 00:29:45.750 "base_bdevs_list": [ 00:29:45.750 { 00:29:45.750 "name": "spare", 00:29:45.750 "uuid": "637c69b2-3161-55e4-8c8d-f90b408e1015", 00:29:45.750 "is_configured": true, 00:29:45.750 "data_offset": 0, 00:29:45.750 "data_size": 65536 00:29:45.750 }, 00:29:45.750 { 00:29:45.750 "name": "BaseBdev2", 00:29:45.750 "uuid": "eceefb1b-d535-5832-b963-7085381d3c03", 00:29:45.750 "is_configured": true, 00:29:45.750 "data_offset": 0, 00:29:45.750 "data_size": 65536 00:29:45.750 }, 00:29:45.750 { 00:29:45.750 "name": "BaseBdev3", 00:29:45.750 "uuid": "599c746c-3c6c-53d8-94b9-86447fb985e2", 00:29:45.750 "is_configured": true, 00:29:45.750 "data_offset": 0, 00:29:45.750 "data_size": 65536 00:29:45.750 }, 00:29:45.750 { 00:29:45.750 "name": "BaseBdev4", 00:29:45.750 "uuid": "f9a765b1-b19a-55d8-a3ed-d7bb695afc53", 00:29:45.750 "is_configured": true, 00:29:45.750 "data_offset": 0, 00:29:45.750 "data_size": 65536 00:29:45.750 } 00:29:45.750 ] 00:29:45.750 }' 00:29:45.750 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.750 11:41:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.315 11:41:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:29:46.315 [2024-07-25 11:41:02.188005] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:46.315 [2024-07-25 11:41:02.188078] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:46.315 [2024-07-25 11:41:02.188186] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:46.315 [2024-07-25 11:41:02.188303] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:46.315 [2024-07-25 11:41:02.188324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:46.572 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.572 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:46.830 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:46.830 /dev/nbd0 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.088 1+0 records in 00:29:47.088 1+0 records out 00:29:47.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633447 s, 6.5 MB/s 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:47.088 11:41:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:47.346 /dev/nbd1 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:47.346 1+0 records in 00:29:47.346 1+0 records out 00:29:47.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438404 s, 9.3 MB/s 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:47.346 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:47.603 11:41:03 
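The cmp just issued is the actual data check for the rebuild: the original member and the spare rebuilt in its place are both exported through the NBD module and compared byte for byte from offset 0, which matches the data_offset of 0 reported for this non-superblock array; any difference makes cmp exit non-zero and fails the test. Roughly the same sequence by hand, assuming the bdev names and socket used in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Export both bdevs over NBD, compare their contents, then tear the exports down.
  "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "rebuilt data matches"
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1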
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:47.603 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:47.603 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:47.604 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:47.604 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:47.604 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:47.604 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:47.861 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:47.861 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:47.861 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:47.861 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:47.861 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:47.862 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:47.862 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:47.862 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:47.862 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:47.862 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:48.119 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 98424 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 98424 ']' 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 98424 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98424 00:29:48.120 killing process with pid 98424 00:29:48.120 Received shutdown signal, test time was about 60.000000 
seconds 00:29:48.120 00:29:48.120 Latency(us) 00:29:48.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.120 =================================================================================================================== 00:29:48.120 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98424' 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 98424 00:29:48.120 11:41:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 98424 00:29:48.120 [2024-07-25 11:41:03.854116] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:48.686 [2024-07-25 11:41:04.288158] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:49.620 ************************************ 00:29:49.620 END TEST raid5f_rebuild_test 00:29:49.620 ************************************ 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:29:49.620 00:29:49.620 real 0m27.360s 00:29:49.620 user 0m39.937s 00:29:49.620 sys 0m3.355s 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.620 11:41:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:29:49.620 11:41:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:49.620 11:41:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:49.620 11:41:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:49.620 ************************************ 00:29:49.620 START TEST raid5f_rebuild_test_sb 00:29:49.620 ************************************ 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid5f 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 
00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid5f '!=' raid1 ']' 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # '[' false = true ']' 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # strip_size=64 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # create_arg+=' -z 64' 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:49.620 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:49.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=99000 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 99000 /var/tmp/spdk-raid.sock 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 99000 ']' 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
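raid5f_rebuild_test_sb re-runs the same raid_rebuild_test function with the arguments raid5f 4 true false true, which, per the locals traced above, select raid_level=raid5f, num_base_bdevs=4, superblock=true, background_io=false and verify=true; the superblock case adds -s to the create arguments, which is why the raid_bdev1 JSON later in this run reports data_offset 2048 and data_size 63488 per base bdev instead of the 0 and 65536 seen in the non-superblock run. Everything is hosted in bdevperf, started as the RPC server with the command traced above; a sketch of that launch, with the backgrounding and pid capture shown only as an illustration of what the harness does:

  # Start bdevperf as the RPC host: a 60 s mixed random read/write workload against
  # raid_bdev1, 3 MiB I/Os at queue depth 2, with raid debug logging enabled.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # The script then blocks in waitforlisten until the socket accepts RPCs.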
00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.621 11:41:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:49.878 [2024-07-25 11:41:05.604868] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:29:49.878 [2024-07-25 11:41:05.605404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:29:49.878 Zero copy mechanism will not be used. 00:29:49.878 -allocations --file-prefix=spdk_pid99000 ] 00:29:50.136 [2024-07-25 11:41:05.789294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.394 [2024-07-25 11:41:06.032464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.394 [2024-07-25 11:41:06.231779] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:50.394 [2024-07-25 11:41:06.231829] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:50.960 11:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.960 11:41:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:50.960 11:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:50.960 11:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:50.960 BaseBdev1_malloc 00:29:51.220 11:41:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:51.220 [2024-07-25 11:41:07.059807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:51.220 [2024-07-25 11:41:07.059896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.220 [2024-07-25 11:41:07.059936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:51.220 [2024-07-25 11:41:07.059953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.220 [2024-07-25 11:41:07.062841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.220 [2024-07-25 11:41:07.062886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:51.220 BaseBdev1 00:29:51.220 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:51.220 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:51.489 BaseBdev2_malloc 00:29:51.489 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:51.748 [2024-07-25 11:41:07.572999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:51.748 [2024-07-25 11:41:07.573126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.748 
[2024-07-25 11:41:07.573167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:51.748 [2024-07-25 11:41:07.573184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.748 [2024-07-25 11:41:07.576005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.748 [2024-07-25 11:41:07.576062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:51.748 BaseBdev2 00:29:51.748 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:51.748 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:52.007 BaseBdev3_malloc 00:29:52.007 11:41:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:52.626 [2024-07-25 11:41:08.157402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:52.626 [2024-07-25 11:41:08.157494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.627 [2024-07-25 11:41:08.157536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:52.627 [2024-07-25 11:41:08.157553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.627 [2024-07-25 11:41:08.160451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.627 [2024-07-25 11:41:08.160499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:52.627 BaseBdev3 00:29:52.627 11:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:52.627 11:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:52.627 BaseBdev4_malloc 00:29:52.627 11:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:52.885 [2024-07-25 11:41:08.756186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:52.885 [2024-07-25 11:41:08.756295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:52.885 [2024-07-25 11:41:08.756335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:52.885 [2024-07-25 11:41:08.756353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:52.885 [2024-07-25 11:41:08.759327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:52.885 [2024-07-25 11:41:08.759375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:52.885 BaseBdev4 00:29:53.144 11:41:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:53.144 spare_malloc 00:29:53.402 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:53.402 spare_delay 00:29:53.402 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:53.660 [2024-07-25 11:41:09.524344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:53.660 [2024-07-25 11:41:09.524441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.660 [2024-07-25 11:41:09.524482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:53.660 [2024-07-25 11:41:09.524500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.660 [2024-07-25 11:41:09.527499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.660 [2024-07-25 11:41:09.527544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:53.660 spare 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:53.919 [2024-07-25 11:41:09.760639] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:53.919 [2024-07-25 11:41:09.763520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:53.919 [2024-07-25 11:41:09.763833] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:53.919 [2024-07-25 11:41:09.764039] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:53.919 [2024-07-25 11:41:09.764477] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:53.919 [2024-07-25 11:41:09.764643] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:53.919 [2024-07-25 11:41:09.765221] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:53.919 [2024-07-25 11:41:09.772512] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:53.919 [2024-07-25 11:41:09.772703] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:53.919 [2024-07-25 11:41:09.773078] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.919 11:41:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.177 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.177 "name": "raid_bdev1", 00:29:54.177 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:29:54.177 "strip_size_kb": 64, 00:29:54.177 "state": "online", 00:29:54.177 "raid_level": "raid5f", 00:29:54.177 "superblock": true, 00:29:54.177 "num_base_bdevs": 4, 00:29:54.177 "num_base_bdevs_discovered": 4, 00:29:54.177 "num_base_bdevs_operational": 4, 00:29:54.177 "base_bdevs_list": [ 00:29:54.177 { 00:29:54.177 "name": "BaseBdev1", 00:29:54.177 "uuid": "9ac75bc8-e712-504f-978d-5c0c64b71d6b", 00:29:54.177 "is_configured": true, 00:29:54.177 "data_offset": 2048, 00:29:54.177 "data_size": 63488 00:29:54.177 }, 00:29:54.177 { 00:29:54.177 "name": "BaseBdev2", 00:29:54.177 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:29:54.177 "is_configured": true, 00:29:54.177 "data_offset": 2048, 00:29:54.177 "data_size": 63488 00:29:54.177 }, 00:29:54.177 { 00:29:54.177 "name": "BaseBdev3", 00:29:54.177 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:29:54.177 "is_configured": true, 00:29:54.177 "data_offset": 2048, 00:29:54.177 "data_size": 63488 00:29:54.177 }, 00:29:54.177 { 00:29:54.177 "name": "BaseBdev4", 00:29:54.177 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:29:54.177 "is_configured": true, 00:29:54.177 "data_offset": 2048, 00:29:54.177 "data_size": 63488 00:29:54.177 } 00:29:54.177 ] 00:29:54.177 }' 00:29:54.177 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.177 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.110 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:55.110 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:55.110 [2024-07-25 11:41:10.873447] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:55.110 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=190464 00:29:55.110 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.110 11:41:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-raid.sock 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:55.368 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:55.627 [2024-07-25 11:41:11.413391] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:55.627 /dev/nbd0 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.627 1+0 records in 00:29:55.627 1+0 records out 00:29:55.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045704 s, 9.0 MB/s 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid5f ']' 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # write_unit_size=384 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@646 -- # echo 192 00:29:55.627 11:41:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:29:56.193 496+0 records in 00:29:56.193 496+0 records out 00:29:56.193 97517568 bytes (98 MB, 93 MiB) copied, 0.592833 s, 164 MB/s 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.193 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:56.451 [2024-07-25 11:41:12.320186] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.709 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:56.967 [2024-07-25 11:41:12.601975] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.967 11:41:12 
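The dd traced just above fills the new array using full-stripe writes: with strip_size_kb=64 and 4 base bdevs, raid5f keeps one parity strip per stripe, so a full stripe carries 3 x 64 KiB = 192 KiB = 196608 bytes = 384 blocks of 512 B, which lines up with write_unit_size=384 and the echo 192 in the trace; 496 such writes cover the whole 190464-block array (190464 x 512 = 97517568 bytes, the figure dd reports). The same arithmetic as a small sketch, assuming 512-byte blocks as reported for this array:

  strip_kb=64; nbase=4
  data_strips=$((nbase - 1))                       # raid5f: one parity strip per stripe
  full_stripe=$((data_strips * strip_kb * 1024))   # 196608 bytes
  echo $((full_stripe / 512))                      # 384-block write unit
  echo $((190464 * 512 / full_stripe))             # 496 full-stripe writes fill the array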
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.967 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.225 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.225 "name": "raid_bdev1", 00:29:57.225 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:29:57.225 "strip_size_kb": 64, 00:29:57.225 "state": "online", 00:29:57.225 "raid_level": "raid5f", 00:29:57.225 "superblock": true, 00:29:57.225 "num_base_bdevs": 4, 00:29:57.225 "num_base_bdevs_discovered": 3, 00:29:57.225 "num_base_bdevs_operational": 3, 00:29:57.225 "base_bdevs_list": [ 00:29:57.225 { 00:29:57.225 "name": null, 00:29:57.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.225 "is_configured": false, 00:29:57.225 "data_offset": 2048, 00:29:57.225 "data_size": 63488 00:29:57.225 }, 00:29:57.225 { 00:29:57.225 "name": "BaseBdev2", 00:29:57.225 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:29:57.225 "is_configured": true, 00:29:57.225 "data_offset": 2048, 00:29:57.225 "data_size": 63488 00:29:57.225 }, 00:29:57.225 { 00:29:57.225 "name": "BaseBdev3", 00:29:57.225 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:29:57.225 "is_configured": true, 00:29:57.225 "data_offset": 2048, 00:29:57.225 "data_size": 63488 00:29:57.225 }, 00:29:57.225 { 00:29:57.225 "name": "BaseBdev4", 00:29:57.225 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:29:57.225 "is_configured": true, 00:29:57.225 "data_offset": 2048, 00:29:57.225 "data_size": 63488 00:29:57.225 } 00:29:57.225 ] 00:29:57.225 }' 00:29:57.225 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.225 11:41:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:57.838 11:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:58.097 [2024-07-25 11:41:13.794361] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:58.097 [2024-07-25 11:41:13.808768] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:29:58.097 [2024-07-25 11:41:13.818217] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:58.097 11:41:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:59.038 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:59.038 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.038 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:59.038 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:59.038 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.039 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.039 11:41:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.297 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 
-- # raid_bdev_info='{ 00:29:59.297 "name": "raid_bdev1", 00:29:59.297 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:29:59.297 "strip_size_kb": 64, 00:29:59.297 "state": "online", 00:29:59.297 "raid_level": "raid5f", 00:29:59.297 "superblock": true, 00:29:59.297 "num_base_bdevs": 4, 00:29:59.297 "num_base_bdevs_discovered": 4, 00:29:59.297 "num_base_bdevs_operational": 4, 00:29:59.297 "process": { 00:29:59.297 "type": "rebuild", 00:29:59.297 "target": "spare", 00:29:59.297 "progress": { 00:29:59.297 "blocks": 23040, 00:29:59.297 "percent": 12 00:29:59.297 } 00:29:59.297 }, 00:29:59.297 "base_bdevs_list": [ 00:29:59.297 { 00:29:59.297 "name": "spare", 00:29:59.297 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:29:59.297 "is_configured": true, 00:29:59.297 "data_offset": 2048, 00:29:59.297 "data_size": 63488 00:29:59.297 }, 00:29:59.297 { 00:29:59.297 "name": "BaseBdev2", 00:29:59.297 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:29:59.297 "is_configured": true, 00:29:59.297 "data_offset": 2048, 00:29:59.297 "data_size": 63488 00:29:59.297 }, 00:29:59.297 { 00:29:59.297 "name": "BaseBdev3", 00:29:59.297 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:29:59.297 "is_configured": true, 00:29:59.297 "data_offset": 2048, 00:29:59.297 "data_size": 63488 00:29:59.297 }, 00:29:59.297 { 00:29:59.297 "name": "BaseBdev4", 00:29:59.297 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:29:59.297 "is_configured": true, 00:29:59.297 "data_offset": 2048, 00:29:59.297 "data_size": 63488 00:29:59.297 } 00:29:59.297 ] 00:29:59.297 }' 00:29:59.297 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.297 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:59.297 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.555 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:59.555 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:59.555 [2024-07-25 11:41:15.416728] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.555 [2024-07-25 11:41:15.435811] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:59.555 [2024-07-25 11:41:15.435922] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.555 [2024-07-25 11:41:15.435970] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.555 [2024-07-25 11:41:15.435988] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:59.814 
11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.814 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.072 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:00.072 "name": "raid_bdev1", 00:30:00.072 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:00.072 "strip_size_kb": 64, 00:30:00.072 "state": "online", 00:30:00.072 "raid_level": "raid5f", 00:30:00.072 "superblock": true, 00:30:00.072 "num_base_bdevs": 4, 00:30:00.072 "num_base_bdevs_discovered": 3, 00:30:00.072 "num_base_bdevs_operational": 3, 00:30:00.072 "base_bdevs_list": [ 00:30:00.072 { 00:30:00.072 "name": null, 00:30:00.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.072 "is_configured": false, 00:30:00.072 "data_offset": 2048, 00:30:00.072 "data_size": 63488 00:30:00.072 }, 00:30:00.072 { 00:30:00.072 "name": "BaseBdev2", 00:30:00.072 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:00.072 "is_configured": true, 00:30:00.072 "data_offset": 2048, 00:30:00.072 "data_size": 63488 00:30:00.072 }, 00:30:00.072 { 00:30:00.072 "name": "BaseBdev3", 00:30:00.072 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:00.072 "is_configured": true, 00:30:00.072 "data_offset": 2048, 00:30:00.072 "data_size": 63488 00:30:00.072 }, 00:30:00.072 { 00:30:00.072 "name": "BaseBdev4", 00:30:00.072 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:00.072 "is_configured": true, 00:30:00.072 "data_offset": 2048, 00:30:00.072 "data_size": 63488 00:30:00.072 } 00:30:00.072 ] 00:30:00.072 }' 00:30:00.072 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:00.072 11:41:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.638 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:00.896 "name": "raid_bdev1", 00:30:00.896 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:00.896 "strip_size_kb": 64, 00:30:00.896 "state": "online", 00:30:00.896 "raid_level": "raid5f", 00:30:00.896 "superblock": 
true, 00:30:00.896 "num_base_bdevs": 4, 00:30:00.896 "num_base_bdevs_discovered": 3, 00:30:00.896 "num_base_bdevs_operational": 3, 00:30:00.896 "base_bdevs_list": [ 00:30:00.896 { 00:30:00.896 "name": null, 00:30:00.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.896 "is_configured": false, 00:30:00.896 "data_offset": 2048, 00:30:00.896 "data_size": 63488 00:30:00.896 }, 00:30:00.896 { 00:30:00.896 "name": "BaseBdev2", 00:30:00.896 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:00.896 "is_configured": true, 00:30:00.896 "data_offset": 2048, 00:30:00.896 "data_size": 63488 00:30:00.896 }, 00:30:00.896 { 00:30:00.896 "name": "BaseBdev3", 00:30:00.896 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:00.896 "is_configured": true, 00:30:00.896 "data_offset": 2048, 00:30:00.896 "data_size": 63488 00:30:00.896 }, 00:30:00.896 { 00:30:00.896 "name": "BaseBdev4", 00:30:00.896 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:00.896 "is_configured": true, 00:30:00.896 "data_offset": 2048, 00:30:00.896 "data_size": 63488 00:30:00.896 } 00:30:00.896 ] 00:30:00.896 }' 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:00.896 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:01.154 [2024-07-25 11:41:16.926981] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:01.154 [2024-07-25 11:41:16.939595] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:30:01.154 [2024-07-25 11:41:16.948162] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:01.154 11:41:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.089 11:41:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.655 "name": "raid_bdev1", 00:30:02.655 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:02.655 "strip_size_kb": 64, 00:30:02.655 "state": "online", 00:30:02.655 "raid_level": "raid5f", 00:30:02.655 "superblock": true, 00:30:02.655 "num_base_bdevs": 4, 00:30:02.655 "num_base_bdevs_discovered": 4, 00:30:02.655 
"num_base_bdevs_operational": 4, 00:30:02.655 "process": { 00:30:02.655 "type": "rebuild", 00:30:02.655 "target": "spare", 00:30:02.655 "progress": { 00:30:02.655 "blocks": 23040, 00:30:02.655 "percent": 12 00:30:02.655 } 00:30:02.655 }, 00:30:02.655 "base_bdevs_list": [ 00:30:02.655 { 00:30:02.655 "name": "spare", 00:30:02.655 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:02.655 "is_configured": true, 00:30:02.655 "data_offset": 2048, 00:30:02.655 "data_size": 63488 00:30:02.655 }, 00:30:02.655 { 00:30:02.655 "name": "BaseBdev2", 00:30:02.655 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:02.655 "is_configured": true, 00:30:02.655 "data_offset": 2048, 00:30:02.655 "data_size": 63488 00:30:02.655 }, 00:30:02.655 { 00:30:02.655 "name": "BaseBdev3", 00:30:02.655 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:02.655 "is_configured": true, 00:30:02.655 "data_offset": 2048, 00:30:02.655 "data_size": 63488 00:30:02.655 }, 00:30:02.655 { 00:30:02.655 "name": "BaseBdev4", 00:30:02.655 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:02.655 "is_configured": true, 00:30:02.655 "data_offset": 2048, 00:30:02.655 "data_size": 63488 00:30:02.655 } 00:30:02.655 ] 00:30:02.655 }' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:02.655 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid5f = raid1 ']' 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1442 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:02.655 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:02.656 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:02.656 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.656 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.913 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.913 "name": "raid_bdev1", 00:30:02.913 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:02.913 "strip_size_kb": 64, 
00:30:02.913 "state": "online", 00:30:02.913 "raid_level": "raid5f", 00:30:02.913 "superblock": true, 00:30:02.913 "num_base_bdevs": 4, 00:30:02.913 "num_base_bdevs_discovered": 4, 00:30:02.913 "num_base_bdevs_operational": 4, 00:30:02.913 "process": { 00:30:02.913 "type": "rebuild", 00:30:02.913 "target": "spare", 00:30:02.913 "progress": { 00:30:02.913 "blocks": 30720, 00:30:02.913 "percent": 16 00:30:02.913 } 00:30:02.913 }, 00:30:02.913 "base_bdevs_list": [ 00:30:02.913 { 00:30:02.913 "name": "spare", 00:30:02.914 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:02.914 "is_configured": true, 00:30:02.914 "data_offset": 2048, 00:30:02.914 "data_size": 63488 00:30:02.914 }, 00:30:02.914 { 00:30:02.914 "name": "BaseBdev2", 00:30:02.914 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:02.914 "is_configured": true, 00:30:02.914 "data_offset": 2048, 00:30:02.914 "data_size": 63488 00:30:02.914 }, 00:30:02.914 { 00:30:02.914 "name": "BaseBdev3", 00:30:02.914 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:02.914 "is_configured": true, 00:30:02.914 "data_offset": 2048, 00:30:02.914 "data_size": 63488 00:30:02.914 }, 00:30:02.914 { 00:30:02.914 "name": "BaseBdev4", 00:30:02.914 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:02.914 "is_configured": true, 00:30:02.914 "data_offset": 2048, 00:30:02.914 "data_size": 63488 00:30:02.914 } 00:30:02.914 ] 00:30:02.914 }' 00:30:02.914 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.914 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:02.914 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.914 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:02.914 11:41:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.847 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.413 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:04.413 "name": "raid_bdev1", 00:30:04.413 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:04.413 "strip_size_kb": 64, 00:30:04.413 "state": "online", 00:30:04.413 "raid_level": "raid5f", 00:30:04.413 "superblock": true, 00:30:04.413 "num_base_bdevs": 4, 00:30:04.413 "num_base_bdevs_discovered": 4, 00:30:04.413 "num_base_bdevs_operational": 4, 00:30:04.413 "process": { 00:30:04.413 "type": "rebuild", 00:30:04.413 "target": "spare", 00:30:04.413 "progress": { 00:30:04.413 "blocks": 
55680, 00:30:04.413 "percent": 29 00:30:04.413 } 00:30:04.413 }, 00:30:04.413 "base_bdevs_list": [ 00:30:04.413 { 00:30:04.413 "name": "spare", 00:30:04.413 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:04.413 "is_configured": true, 00:30:04.413 "data_offset": 2048, 00:30:04.413 "data_size": 63488 00:30:04.413 }, 00:30:04.413 { 00:30:04.413 "name": "BaseBdev2", 00:30:04.413 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:04.413 "is_configured": true, 00:30:04.413 "data_offset": 2048, 00:30:04.413 "data_size": 63488 00:30:04.413 }, 00:30:04.413 { 00:30:04.413 "name": "BaseBdev3", 00:30:04.413 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:04.413 "is_configured": true, 00:30:04.413 "data_offset": 2048, 00:30:04.414 "data_size": 63488 00:30:04.414 }, 00:30:04.414 { 00:30:04.414 "name": "BaseBdev4", 00:30:04.414 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:04.414 "is_configured": true, 00:30:04.414 "data_offset": 2048, 00:30:04.414 "data_size": 63488 00:30:04.414 } 00:30:04.414 ] 00:30:04.414 }' 00:30:04.414 11:41:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:04.414 11:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.414 11:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:04.414 11:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.414 11:41:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.348 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.349 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.606 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.607 "name": "raid_bdev1", 00:30:05.607 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:05.607 "strip_size_kb": 64, 00:30:05.607 "state": "online", 00:30:05.607 "raid_level": "raid5f", 00:30:05.607 "superblock": true, 00:30:05.607 "num_base_bdevs": 4, 00:30:05.607 "num_base_bdevs_discovered": 4, 00:30:05.607 "num_base_bdevs_operational": 4, 00:30:05.607 "process": { 00:30:05.607 "type": "rebuild", 00:30:05.607 "target": "spare", 00:30:05.607 "progress": { 00:30:05.607 "blocks": 82560, 00:30:05.607 "percent": 43 00:30:05.607 } 00:30:05.607 }, 00:30:05.607 "base_bdevs_list": [ 00:30:05.607 { 00:30:05.607 "name": "spare", 00:30:05.607 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:05.607 "is_configured": true, 00:30:05.607 "data_offset": 2048, 00:30:05.607 "data_size": 63488 00:30:05.607 }, 00:30:05.607 { 00:30:05.607 "name": 
"BaseBdev2", 00:30:05.607 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:05.607 "is_configured": true, 00:30:05.607 "data_offset": 2048, 00:30:05.607 "data_size": 63488 00:30:05.607 }, 00:30:05.607 { 00:30:05.607 "name": "BaseBdev3", 00:30:05.607 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:05.607 "is_configured": true, 00:30:05.607 "data_offset": 2048, 00:30:05.607 "data_size": 63488 00:30:05.607 }, 00:30:05.607 { 00:30:05.607 "name": "BaseBdev4", 00:30:05.607 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:05.607 "is_configured": true, 00:30:05.607 "data_offset": 2048, 00:30:05.607 "data_size": 63488 00:30:05.607 } 00:30:05.607 ] 00:30:05.607 }' 00:30:05.607 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.607 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.607 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:05.607 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.607 11:41:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.980 "name": "raid_bdev1", 00:30:06.980 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:06.980 "strip_size_kb": 64, 00:30:06.980 "state": "online", 00:30:06.980 "raid_level": "raid5f", 00:30:06.980 "superblock": true, 00:30:06.980 "num_base_bdevs": 4, 00:30:06.980 "num_base_bdevs_discovered": 4, 00:30:06.980 "num_base_bdevs_operational": 4, 00:30:06.980 "process": { 00:30:06.980 "type": "rebuild", 00:30:06.980 "target": "spare", 00:30:06.980 "progress": { 00:30:06.980 "blocks": 107520, 00:30:06.980 "percent": 56 00:30:06.980 } 00:30:06.980 }, 00:30:06.980 "base_bdevs_list": [ 00:30:06.980 { 00:30:06.980 "name": "spare", 00:30:06.980 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:06.980 "is_configured": true, 00:30:06.980 "data_offset": 2048, 00:30:06.980 "data_size": 63488 00:30:06.980 }, 00:30:06.980 { 00:30:06.980 "name": "BaseBdev2", 00:30:06.980 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:06.980 "is_configured": true, 00:30:06.980 "data_offset": 2048, 00:30:06.980 "data_size": 63488 00:30:06.980 }, 00:30:06.980 { 00:30:06.980 "name": "BaseBdev3", 00:30:06.980 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:06.980 "is_configured": true, 00:30:06.980 "data_offset": 
2048, 00:30:06.980 "data_size": 63488 00:30:06.980 }, 00:30:06.980 { 00:30:06.980 "name": "BaseBdev4", 00:30:06.980 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:06.980 "is_configured": true, 00:30:06.980 "data_offset": 2048, 00:30:06.980 "data_size": 63488 00:30:06.980 } 00:30:06.980 ] 00:30:06.980 }' 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.980 11:41:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.353 11:41:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.353 "name": "raid_bdev1", 00:30:08.353 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:08.353 "strip_size_kb": 64, 00:30:08.353 "state": "online", 00:30:08.353 "raid_level": "raid5f", 00:30:08.353 "superblock": true, 00:30:08.353 "num_base_bdevs": 4, 00:30:08.353 "num_base_bdevs_discovered": 4, 00:30:08.353 "num_base_bdevs_operational": 4, 00:30:08.353 "process": { 00:30:08.353 "type": "rebuild", 00:30:08.353 "target": "spare", 00:30:08.353 "progress": { 00:30:08.353 "blocks": 132480, 00:30:08.353 "percent": 69 00:30:08.353 } 00:30:08.353 }, 00:30:08.353 "base_bdevs_list": [ 00:30:08.353 { 00:30:08.353 "name": "spare", 00:30:08.353 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:08.353 "is_configured": true, 00:30:08.353 "data_offset": 2048, 00:30:08.353 "data_size": 63488 00:30:08.353 }, 00:30:08.353 { 00:30:08.353 "name": "BaseBdev2", 00:30:08.353 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:08.353 "is_configured": true, 00:30:08.353 "data_offset": 2048, 00:30:08.353 "data_size": 63488 00:30:08.353 }, 00:30:08.353 { 00:30:08.353 "name": "BaseBdev3", 00:30:08.353 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:08.353 "is_configured": true, 00:30:08.353 "data_offset": 2048, 00:30:08.353 "data_size": 63488 00:30:08.353 }, 00:30:08.353 { 00:30:08.353 "name": "BaseBdev4", 00:30:08.353 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:08.353 "is_configured": true, 00:30:08.353 "data_offset": 2048, 00:30:08.353 "data_size": 63488 00:30:08.353 } 00:30:08.353 ] 00:30:08.353 }' 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.353 11:41:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.289 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.548 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.548 "name": "raid_bdev1", 00:30:09.548 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:09.548 "strip_size_kb": 64, 00:30:09.548 "state": "online", 00:30:09.548 "raid_level": "raid5f", 00:30:09.548 "superblock": true, 00:30:09.548 "num_base_bdevs": 4, 00:30:09.548 "num_base_bdevs_discovered": 4, 00:30:09.548 "num_base_bdevs_operational": 4, 00:30:09.548 "process": { 00:30:09.548 "type": "rebuild", 00:30:09.548 "target": "spare", 00:30:09.548 "progress": { 00:30:09.548 "blocks": 159360, 00:30:09.548 "percent": 83 00:30:09.548 } 00:30:09.548 }, 00:30:09.548 "base_bdevs_list": [ 00:30:09.548 { 00:30:09.548 "name": "spare", 00:30:09.548 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:09.548 "is_configured": true, 00:30:09.548 "data_offset": 2048, 00:30:09.548 "data_size": 63488 00:30:09.548 }, 00:30:09.548 { 00:30:09.548 "name": "BaseBdev2", 00:30:09.548 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:09.548 "is_configured": true, 00:30:09.548 "data_offset": 2048, 00:30:09.548 "data_size": 63488 00:30:09.548 }, 00:30:09.548 { 00:30:09.548 "name": "BaseBdev3", 00:30:09.548 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:09.548 "is_configured": true, 00:30:09.548 "data_offset": 2048, 00:30:09.548 "data_size": 63488 00:30:09.548 }, 00:30:09.548 { 00:30:09.548 "name": "BaseBdev4", 00:30:09.548 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:09.548 "is_configured": true, 00:30:09.548 "data_offset": 2048, 00:30:09.548 "data_size": 63488 00:30:09.548 } 00:30:09.548 ] 00:30:09.548 }' 00:30:09.548 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.807 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.807 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.807 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.807 11:41:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.742 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.000 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.000 "name": "raid_bdev1", 00:30:11.000 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:11.000 "strip_size_kb": 64, 00:30:11.000 "state": "online", 00:30:11.001 "raid_level": "raid5f", 00:30:11.001 "superblock": true, 00:30:11.001 "num_base_bdevs": 4, 00:30:11.001 "num_base_bdevs_discovered": 4, 00:30:11.001 "num_base_bdevs_operational": 4, 00:30:11.001 "process": { 00:30:11.001 "type": "rebuild", 00:30:11.001 "target": "spare", 00:30:11.001 "progress": { 00:30:11.001 "blocks": 186240, 00:30:11.001 "percent": 97 00:30:11.001 } 00:30:11.001 }, 00:30:11.001 "base_bdevs_list": [ 00:30:11.001 { 00:30:11.001 "name": "spare", 00:30:11.001 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:11.001 "is_configured": true, 00:30:11.001 "data_offset": 2048, 00:30:11.001 "data_size": 63488 00:30:11.001 }, 00:30:11.001 { 00:30:11.001 "name": "BaseBdev2", 00:30:11.001 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:11.001 "is_configured": true, 00:30:11.001 "data_offset": 2048, 00:30:11.001 "data_size": 63488 00:30:11.001 }, 00:30:11.001 { 00:30:11.001 "name": "BaseBdev3", 00:30:11.001 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:11.001 "is_configured": true, 00:30:11.001 "data_offset": 2048, 00:30:11.001 "data_size": 63488 00:30:11.001 }, 00:30:11.001 { 00:30:11.001 "name": "BaseBdev4", 00:30:11.001 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:11.001 "is_configured": true, 00:30:11.001 "data_offset": 2048, 00:30:11.001 "data_size": 63488 00:30:11.001 } 00:30:11.001 ] 00:30:11.001 }' 00:30:11.001 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:11.001 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:11.001 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:11.259 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:11.259 11:41:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:11.259 [2024-07-25 11:41:27.053528] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:11.259 [2024-07-25 11:41:27.053688] bdev_raid.c:2548:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:11.259 [2024-07-25 11:41:27.053883] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.217 11:41:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.476 "name": "raid_bdev1", 00:30:12.476 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:12.476 "strip_size_kb": 64, 00:30:12.476 "state": "online", 00:30:12.476 "raid_level": "raid5f", 00:30:12.476 "superblock": true, 00:30:12.476 "num_base_bdevs": 4, 00:30:12.476 "num_base_bdevs_discovered": 4, 00:30:12.476 "num_base_bdevs_operational": 4, 00:30:12.476 "base_bdevs_list": [ 00:30:12.476 { 00:30:12.476 "name": "spare", 00:30:12.476 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:12.476 "is_configured": true, 00:30:12.476 "data_offset": 2048, 00:30:12.476 "data_size": 63488 00:30:12.476 }, 00:30:12.476 { 00:30:12.476 "name": "BaseBdev2", 00:30:12.476 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:12.476 "is_configured": true, 00:30:12.476 "data_offset": 2048, 00:30:12.476 "data_size": 63488 00:30:12.476 }, 00:30:12.476 { 00:30:12.476 "name": "BaseBdev3", 00:30:12.476 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:12.476 "is_configured": true, 00:30:12.476 "data_offset": 2048, 00:30:12.476 "data_size": 63488 00:30:12.476 }, 00:30:12.476 { 00:30:12.476 "name": "BaseBdev4", 00:30:12.476 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:12.476 "is_configured": true, 00:30:12.476 "data_offset": 2048, 00:30:12.476 "data_size": 63488 00:30:12.476 } 00:30:12.476 ] 00:30:12.476 }' 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.476 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.734 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.734 "name": "raid_bdev1", 00:30:12.734 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:12.734 "strip_size_kb": 64, 00:30:12.734 "state": "online", 00:30:12.734 "raid_level": "raid5f", 00:30:12.734 "superblock": true, 00:30:12.734 "num_base_bdevs": 4, 00:30:12.734 "num_base_bdevs_discovered": 4, 00:30:12.734 "num_base_bdevs_operational": 4, 00:30:12.734 "base_bdevs_list": [ 00:30:12.734 { 00:30:12.734 "name": "spare", 00:30:12.734 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:12.734 "is_configured": true, 00:30:12.734 "data_offset": 2048, 00:30:12.734 "data_size": 63488 00:30:12.734 }, 00:30:12.734 { 00:30:12.734 "name": "BaseBdev2", 00:30:12.734 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:12.734 "is_configured": true, 00:30:12.734 "data_offset": 2048, 00:30:12.734 "data_size": 63488 00:30:12.734 }, 00:30:12.734 { 00:30:12.734 "name": "BaseBdev3", 00:30:12.734 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:12.734 "is_configured": true, 00:30:12.734 "data_offset": 2048, 00:30:12.734 "data_size": 63488 00:30:12.734 }, 00:30:12.734 { 00:30:12.734 "name": "BaseBdev4", 00:30:12.734 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:12.734 "is_configured": true, 00:30:12.734 "data_offset": 2048, 00:30:12.734 "data_size": 63488 00:30:12.734 } 00:30:12.734 ] 00:30:12.734 }' 00:30:12.734 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.992 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.251 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.251 "name": "raid_bdev1", 00:30:13.251 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:13.251 "strip_size_kb": 64, 00:30:13.251 "state": "online", 00:30:13.251 "raid_level": "raid5f", 00:30:13.251 "superblock": true, 00:30:13.251 "num_base_bdevs": 4, 00:30:13.251 "num_base_bdevs_discovered": 4, 00:30:13.251 "num_base_bdevs_operational": 4, 00:30:13.251 "base_bdevs_list": [ 00:30:13.251 { 00:30:13.251 "name": "spare", 00:30:13.251 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:13.251 "is_configured": true, 00:30:13.251 "data_offset": 2048, 00:30:13.251 "data_size": 63488 00:30:13.251 }, 00:30:13.251 { 00:30:13.251 "name": "BaseBdev2", 00:30:13.251 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:13.251 "is_configured": true, 00:30:13.251 "data_offset": 2048, 00:30:13.251 "data_size": 63488 00:30:13.251 }, 00:30:13.251 { 00:30:13.251 "name": "BaseBdev3", 00:30:13.251 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:13.251 "is_configured": true, 00:30:13.251 "data_offset": 2048, 00:30:13.251 "data_size": 63488 00:30:13.251 }, 00:30:13.251 { 00:30:13.251 "name": "BaseBdev4", 00:30:13.251 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:13.251 "is_configured": true, 00:30:13.251 "data_offset": 2048, 00:30:13.251 "data_size": 63488 00:30:13.251 } 00:30:13.251 ] 00:30:13.251 }' 00:30:13.251 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.251 11:41:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.817 11:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:14.076 [2024-07-25 11:41:29.848608] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:14.076 [2024-07-25 11:41:29.848663] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:14.076 [2024-07-25 11:41:29.848812] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:14.076 [2024-07-25 11:41:29.848932] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:14.076 [2024-07-25 11:41:29.848954] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:14.076 11:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.076 11:41:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:30:14.334 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:14.334 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:14.334 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:30:14.334 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.335 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:14.593 /dev/nbd0 00:30:14.593 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:14.593 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:14.593 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:14.593 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:14.593 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:14.594 1+0 records in 00:30:14.594 1+0 records out 00:30:14.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054266 s, 7.5 MB/s 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.594 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:14.852 /dev/nbd1 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:14.852 1+0 records in 00:30:14.852 1+0 records out 00:30:14.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699827 s, 5.9 MB/s 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.852 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:15.110 11:41:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:15.368 
11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:15.368 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:15.626 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:15.627 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:15.627 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:15.627 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:15.627 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:15.885 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:16.143 [2024-07-25 11:41:31.900032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:16.143 [2024-07-25 11:41:31.900138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.143 [2024-07-25 11:41:31.900173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:16.143 [2024-07-25 11:41:31.900200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.143 [2024-07-25 11:41:31.903332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.143 [2024-07-25 11:41:31.903387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:16.143 [2024-07-25 11:41:31.903519] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:16.143 [2024-07-25 11:41:31.903605] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:16.143 [2024-07-25 11:41:31.903848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:16.143 [2024-07-25 11:41:31.904013] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:16.143 [2024-07-25 11:41:31.904179] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:16.143 spare 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:16.143 11:41:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.143 11:41:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.143 [2024-07-25 11:41:32.004334] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:16.143 [2024-07-25 11:41:32.004613] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:16.143 [2024-07-25 11:41:32.005102] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:30:16.143 [2024-07-25 11:41:32.011370] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:16.143 [2024-07-25 11:41:32.011551] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:16.143 [2024-07-25 11:41:32.012024] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.402 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:16.402 "name": "raid_bdev1", 00:30:16.402 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:16.402 "strip_size_kb": 64, 00:30:16.402 "state": "online", 00:30:16.402 "raid_level": "raid5f", 00:30:16.402 "superblock": true, 00:30:16.402 "num_base_bdevs": 4, 00:30:16.402 "num_base_bdevs_discovered": 4, 00:30:16.402 "num_base_bdevs_operational": 4, 00:30:16.402 "base_bdevs_list": [ 00:30:16.402 { 00:30:16.402 "name": "spare", 00:30:16.402 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:16.402 "is_configured": true, 00:30:16.402 "data_offset": 2048, 00:30:16.402 "data_size": 63488 00:30:16.402 }, 00:30:16.402 { 00:30:16.402 "name": "BaseBdev2", 00:30:16.402 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:16.402 "is_configured": true, 00:30:16.402 "data_offset": 2048, 00:30:16.402 "data_size": 63488 00:30:16.402 }, 00:30:16.402 { 00:30:16.402 "name": "BaseBdev3", 00:30:16.402 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:16.402 "is_configured": true, 00:30:16.402 "data_offset": 2048, 00:30:16.402 "data_size": 63488 00:30:16.402 }, 00:30:16.402 { 00:30:16.402 "name": "BaseBdev4", 00:30:16.402 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:16.402 "is_configured": true, 00:30:16.402 "data_offset": 2048, 00:30:16.402 "data_size": 63488 00:30:16.402 } 00:30:16.402 ] 00:30:16.402 }' 00:30:16.402 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:16.402 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.969 11:41:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.227 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:17.227 "name": "raid_bdev1", 00:30:17.227 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:17.227 "strip_size_kb": 64, 00:30:17.227 "state": "online", 00:30:17.227 "raid_level": "raid5f", 00:30:17.227 "superblock": true, 00:30:17.227 "num_base_bdevs": 4, 00:30:17.227 "num_base_bdevs_discovered": 4, 00:30:17.227 "num_base_bdevs_operational": 4, 00:30:17.227 "base_bdevs_list": [ 00:30:17.227 { 00:30:17.227 "name": "spare", 00:30:17.227 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:17.227 "is_configured": true, 00:30:17.227 "data_offset": 2048, 00:30:17.227 "data_size": 63488 00:30:17.227 }, 00:30:17.227 { 00:30:17.227 "name": "BaseBdev2", 00:30:17.227 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:17.227 "is_configured": true, 00:30:17.227 "data_offset": 2048, 00:30:17.227 "data_size": 63488 00:30:17.227 }, 00:30:17.227 { 00:30:17.227 "name": "BaseBdev3", 00:30:17.227 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:17.227 "is_configured": true, 00:30:17.227 "data_offset": 2048, 00:30:17.227 "data_size": 63488 00:30:17.227 }, 00:30:17.227 { 00:30:17.227 "name": "BaseBdev4", 00:30:17.227 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:17.227 "is_configured": true, 00:30:17.227 "data_offset": 2048, 00:30:17.227 "data_size": 63488 00:30:17.227 } 00:30:17.227 ] 00:30:17.227 }' 00:30:17.227 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:17.486 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:17.486 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:17.486 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:17.486 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:17.486 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.744 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.744 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:18.002 [2024-07-25 11:41:33.639824] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 
-- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.002 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.260 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:18.260 "name": "raid_bdev1", 00:30:18.260 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:18.260 "strip_size_kb": 64, 00:30:18.260 "state": "online", 00:30:18.260 "raid_level": "raid5f", 00:30:18.260 "superblock": true, 00:30:18.260 "num_base_bdevs": 4, 00:30:18.260 "num_base_bdevs_discovered": 3, 00:30:18.260 "num_base_bdevs_operational": 3, 00:30:18.260 "base_bdevs_list": [ 00:30:18.260 { 00:30:18.260 "name": null, 00:30:18.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.260 "is_configured": false, 00:30:18.260 "data_offset": 2048, 00:30:18.260 "data_size": 63488 00:30:18.260 }, 00:30:18.260 { 00:30:18.260 "name": "BaseBdev2", 00:30:18.260 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:18.260 "is_configured": true, 00:30:18.260 "data_offset": 2048, 00:30:18.260 "data_size": 63488 00:30:18.260 }, 00:30:18.260 { 00:30:18.260 "name": "BaseBdev3", 00:30:18.260 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:18.260 "is_configured": true, 00:30:18.260 "data_offset": 2048, 00:30:18.260 "data_size": 63488 00:30:18.260 }, 00:30:18.260 { 00:30:18.260 "name": "BaseBdev4", 00:30:18.260 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:18.260 "is_configured": true, 00:30:18.260 "data_offset": 2048, 00:30:18.260 "data_size": 63488 00:30:18.260 } 00:30:18.260 ] 00:30:18.260 }' 00:30:18.260 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:18.260 11:41:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.826 11:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:19.084 [2024-07-25 11:41:34.914387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:19.084 [2024-07-25 11:41:34.914668] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:19.084 [2024-07-25 11:41:34.914716] 
bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:19.084 [2024-07-25 11:41:34.914787] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:19.084 [2024-07-25 11:41:34.926791] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:30:19.084 [2024-07-25 11:41:34.934872] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:19.084 11:41:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.499 11:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:20.499 "name": "raid_bdev1", 00:30:20.499 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:20.499 "strip_size_kb": 64, 00:30:20.499 "state": "online", 00:30:20.499 "raid_level": "raid5f", 00:30:20.499 "superblock": true, 00:30:20.499 "num_base_bdevs": 4, 00:30:20.499 "num_base_bdevs_discovered": 4, 00:30:20.499 "num_base_bdevs_operational": 4, 00:30:20.499 "process": { 00:30:20.499 "type": "rebuild", 00:30:20.499 "target": "spare", 00:30:20.499 "progress": { 00:30:20.499 "blocks": 23040, 00:30:20.499 "percent": 12 00:30:20.499 } 00:30:20.499 }, 00:30:20.499 "base_bdevs_list": [ 00:30:20.499 { 00:30:20.499 "name": "spare", 00:30:20.499 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:20.499 "is_configured": true, 00:30:20.499 "data_offset": 2048, 00:30:20.499 "data_size": 63488 00:30:20.499 }, 00:30:20.499 { 00:30:20.499 "name": "BaseBdev2", 00:30:20.499 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:20.499 "is_configured": true, 00:30:20.499 "data_offset": 2048, 00:30:20.499 "data_size": 63488 00:30:20.499 }, 00:30:20.499 { 00:30:20.499 "name": "BaseBdev3", 00:30:20.499 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:20.499 "is_configured": true, 00:30:20.499 "data_offset": 2048, 00:30:20.499 "data_size": 63488 00:30:20.499 }, 00:30:20.499 { 00:30:20.499 "name": "BaseBdev4", 00:30:20.499 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:20.499 "is_configured": true, 00:30:20.499 "data_offset": 2048, 00:30:20.499 "data_size": 63488 00:30:20.499 } 00:30:20.499 ] 00:30:20.499 }' 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e 
]] 00:30:20.499 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:20.757 [2024-07-25 11:41:36.565678] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:21.015 [2024-07-25 11:41:36.652712] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:21.015 [2024-07-25 11:41:36.652807] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:21.015 [2024-07-25 11:41:36.652862] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:21.015 [2024-07-25 11:41:36.652889] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.016 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.273 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:21.273 "name": "raid_bdev1", 00:30:21.273 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:21.273 "strip_size_kb": 64, 00:30:21.273 "state": "online", 00:30:21.273 "raid_level": "raid5f", 00:30:21.273 "superblock": true, 00:30:21.273 "num_base_bdevs": 4, 00:30:21.273 "num_base_bdevs_discovered": 3, 00:30:21.273 "num_base_bdevs_operational": 3, 00:30:21.273 "base_bdevs_list": [ 00:30:21.273 { 00:30:21.273 "name": null, 00:30:21.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.273 "is_configured": false, 00:30:21.273 "data_offset": 2048, 00:30:21.273 "data_size": 63488 00:30:21.273 }, 00:30:21.273 { 00:30:21.273 "name": "BaseBdev2", 00:30:21.273 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:21.273 "is_configured": true, 00:30:21.273 "data_offset": 2048, 00:30:21.273 "data_size": 63488 00:30:21.273 }, 00:30:21.273 { 00:30:21.273 "name": "BaseBdev3", 00:30:21.273 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:21.273 "is_configured": true, 00:30:21.273 "data_offset": 2048, 00:30:21.273 "data_size": 63488 00:30:21.273 }, 00:30:21.273 { 00:30:21.273 "name": "BaseBdev4", 00:30:21.273 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:21.273 
"is_configured": true, 00:30:21.273 "data_offset": 2048, 00:30:21.273 "data_size": 63488 00:30:21.273 } 00:30:21.273 ] 00:30:21.273 }' 00:30:21.273 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:21.273 11:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.838 11:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:22.093 [2024-07-25 11:41:37.870985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:22.094 [2024-07-25 11:41:37.871073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:22.094 [2024-07-25 11:41:37.871135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:22.094 [2024-07-25 11:41:37.871153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:22.094 [2024-07-25 11:41:37.871858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:22.094 [2024-07-25 11:41:37.871894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:22.094 [2024-07-25 11:41:37.872016] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:22.094 [2024-07-25 11:41:37.872041] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:22.094 [2024-07-25 11:41:37.872058] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:22.094 [2024-07-25 11:41:37.872088] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:22.094 [2024-07-25 11:41:37.885036] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:30:22.094 spare 00:30:22.094 [2024-07-25 11:41:37.893726] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:22.094 11:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.063 11:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.322 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.322 "name": "raid_bdev1", 00:30:23.322 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:23.322 "strip_size_kb": 64, 00:30:23.322 "state": "online", 00:30:23.322 "raid_level": "raid5f", 00:30:23.322 "superblock": true, 00:30:23.322 "num_base_bdevs": 4, 00:30:23.322 "num_base_bdevs_discovered": 4, 00:30:23.322 "num_base_bdevs_operational": 4, 
00:30:23.322 "process": { 00:30:23.322 "type": "rebuild", 00:30:23.322 "target": "spare", 00:30:23.322 "progress": { 00:30:23.322 "blocks": 23040, 00:30:23.322 "percent": 12 00:30:23.322 } 00:30:23.322 }, 00:30:23.322 "base_bdevs_list": [ 00:30:23.322 { 00:30:23.322 "name": "spare", 00:30:23.322 "uuid": "17f91271-c9b6-5bfa-af19-0ba8a45b4a77", 00:30:23.322 "is_configured": true, 00:30:23.322 "data_offset": 2048, 00:30:23.322 "data_size": 63488 00:30:23.322 }, 00:30:23.322 { 00:30:23.322 "name": "BaseBdev2", 00:30:23.322 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:23.322 "is_configured": true, 00:30:23.322 "data_offset": 2048, 00:30:23.322 "data_size": 63488 00:30:23.322 }, 00:30:23.322 { 00:30:23.322 "name": "BaseBdev3", 00:30:23.322 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:23.322 "is_configured": true, 00:30:23.322 "data_offset": 2048, 00:30:23.322 "data_size": 63488 00:30:23.322 }, 00:30:23.322 { 00:30:23.322 "name": "BaseBdev4", 00:30:23.322 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:23.322 "is_configured": true, 00:30:23.322 "data_offset": 2048, 00:30:23.322 "data_size": 63488 00:30:23.322 } 00:30:23.322 ] 00:30:23.322 }' 00:30:23.322 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.580 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.580 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.580 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.580 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:23.838 [2024-07-25 11:41:39.540127] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.838 [2024-07-25 11:41:39.612012] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:23.838 [2024-07-25 11:41:39.612225] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:23.838 [2024-07-25 11:41:39.612253] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.838 [2024-07-25 11:41:39.612269] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.838 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.097 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.097 "name": "raid_bdev1", 00:30:24.097 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:24.097 "strip_size_kb": 64, 00:30:24.097 "state": "online", 00:30:24.097 "raid_level": "raid5f", 00:30:24.097 "superblock": true, 00:30:24.097 "num_base_bdevs": 4, 00:30:24.097 "num_base_bdevs_discovered": 3, 00:30:24.097 "num_base_bdevs_operational": 3, 00:30:24.097 "base_bdevs_list": [ 00:30:24.097 { 00:30:24.097 "name": null, 00:30:24.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.097 "is_configured": false, 00:30:24.097 "data_offset": 2048, 00:30:24.097 "data_size": 63488 00:30:24.097 }, 00:30:24.097 { 00:30:24.097 "name": "BaseBdev2", 00:30:24.097 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:24.097 "is_configured": true, 00:30:24.097 "data_offset": 2048, 00:30:24.097 "data_size": 63488 00:30:24.097 }, 00:30:24.097 { 00:30:24.097 "name": "BaseBdev3", 00:30:24.097 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:24.097 "is_configured": true, 00:30:24.097 "data_offset": 2048, 00:30:24.097 "data_size": 63488 00:30:24.097 }, 00:30:24.097 { 00:30:24.097 "name": "BaseBdev4", 00:30:24.097 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:24.097 "is_configured": true, 00:30:24.097 "data_offset": 2048, 00:30:24.097 "data_size": 63488 00:30:24.097 } 00:30:24.097 ] 00:30:24.097 }' 00:30:24.097 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.097 11:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:25.031 "name": "raid_bdev1", 00:30:25.031 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:25.031 "strip_size_kb": 64, 00:30:25.031 "state": "online", 00:30:25.031 "raid_level": "raid5f", 00:30:25.031 "superblock": true, 00:30:25.031 "num_base_bdevs": 4, 00:30:25.031 "num_base_bdevs_discovered": 3, 00:30:25.031 "num_base_bdevs_operational": 3, 00:30:25.031 "base_bdevs_list": [ 00:30:25.031 { 00:30:25.031 "name": null, 00:30:25.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.031 "is_configured": false, 00:30:25.031 "data_offset": 2048, 00:30:25.031 "data_size": 63488 
00:30:25.031 }, 00:30:25.031 { 00:30:25.031 "name": "BaseBdev2", 00:30:25.031 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:25.031 "is_configured": true, 00:30:25.031 "data_offset": 2048, 00:30:25.031 "data_size": 63488 00:30:25.031 }, 00:30:25.031 { 00:30:25.031 "name": "BaseBdev3", 00:30:25.031 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:25.031 "is_configured": true, 00:30:25.031 "data_offset": 2048, 00:30:25.031 "data_size": 63488 00:30:25.031 }, 00:30:25.031 { 00:30:25.031 "name": "BaseBdev4", 00:30:25.031 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:25.031 "is_configured": true, 00:30:25.031 "data_offset": 2048, 00:30:25.031 "data_size": 63488 00:30:25.031 } 00:30:25.031 ] 00:30:25.031 }' 00:30:25.031 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:25.289 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:25.289 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:25.289 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:25.289 11:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:25.547 11:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:25.547 [2024-07-25 11:41:41.403449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:25.547 [2024-07-25 11:41:41.403560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:25.547 [2024-07-25 11:41:41.403594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:25.547 [2024-07-25 11:41:41.403614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:25.547 [2024-07-25 11:41:41.404225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:25.547 [2024-07-25 11:41:41.404263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:25.547 [2024-07-25 11:41:41.404367] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:25.547 [2024-07-25 11:41:41.404394] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:25.547 [2024-07-25 11:41:41.404407] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:25.547 BaseBdev1 00:30:25.547 11:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=3 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.924 "name": "raid_bdev1", 00:30:26.924 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:26.924 "strip_size_kb": 64, 00:30:26.924 "state": "online", 00:30:26.924 "raid_level": "raid5f", 00:30:26.924 "superblock": true, 00:30:26.924 "num_base_bdevs": 4, 00:30:26.924 "num_base_bdevs_discovered": 3, 00:30:26.924 "num_base_bdevs_operational": 3, 00:30:26.924 "base_bdevs_list": [ 00:30:26.924 { 00:30:26.924 "name": null, 00:30:26.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.924 "is_configured": false, 00:30:26.924 "data_offset": 2048, 00:30:26.924 "data_size": 63488 00:30:26.924 }, 00:30:26.924 { 00:30:26.924 "name": "BaseBdev2", 00:30:26.924 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:26.924 "is_configured": true, 00:30:26.924 "data_offset": 2048, 00:30:26.924 "data_size": 63488 00:30:26.924 }, 00:30:26.924 { 00:30:26.924 "name": "BaseBdev3", 00:30:26.924 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:26.924 "is_configured": true, 00:30:26.924 "data_offset": 2048, 00:30:26.924 "data_size": 63488 00:30:26.924 }, 00:30:26.924 { 00:30:26.924 "name": "BaseBdev4", 00:30:26.924 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:26.924 "is_configured": true, 00:30:26.924 "data_offset": 2048, 00:30:26.924 "data_size": 63488 00:30:26.924 } 00:30:26.924 ] 00:30:26.924 }' 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.924 11:41:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.859 "name": "raid_bdev1", 00:30:27.859 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:27.859 "strip_size_kb": 64, 00:30:27.859 "state": "online", 00:30:27.859 
"raid_level": "raid5f", 00:30:27.859 "superblock": true, 00:30:27.859 "num_base_bdevs": 4, 00:30:27.859 "num_base_bdevs_discovered": 3, 00:30:27.859 "num_base_bdevs_operational": 3, 00:30:27.859 "base_bdevs_list": [ 00:30:27.859 { 00:30:27.859 "name": null, 00:30:27.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.859 "is_configured": false, 00:30:27.859 "data_offset": 2048, 00:30:27.859 "data_size": 63488 00:30:27.859 }, 00:30:27.859 { 00:30:27.859 "name": "BaseBdev2", 00:30:27.859 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:27.859 "is_configured": true, 00:30:27.859 "data_offset": 2048, 00:30:27.859 "data_size": 63488 00:30:27.859 }, 00:30:27.859 { 00:30:27.859 "name": "BaseBdev3", 00:30:27.859 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:27.859 "is_configured": true, 00:30:27.859 "data_offset": 2048, 00:30:27.859 "data_size": 63488 00:30:27.859 }, 00:30:27.859 { 00:30:27.859 "name": "BaseBdev4", 00:30:27.859 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:27.859 "is_configured": true, 00:30:27.859 "data_offset": 2048, 00:30:27.859 "data_size": 63488 00:30:27.859 } 00:30:27.859 ] 00:30:27.859 }' 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:27.859 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:28.117 11:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:28.117 [2024-07-25 11:41:43.996335] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:30:28.117 [2024-07-25 11:41:43.996550] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:28.117 [2024-07-25 11:41:43.996579] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:28.376 request: 00:30:28.376 { 00:30:28.376 "base_bdev": "BaseBdev1", 00:30:28.376 "raid_bdev": "raid_bdev1", 00:30:28.376 "method": "bdev_raid_add_base_bdev", 00:30:28.376 "req_id": 1 00:30:28.376 } 00:30:28.376 Got JSON-RPC error response 00:30:28.376 response: 00:30:28.376 { 00:30:28.376 "code": -22, 00:30:28.376 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:28.376 } 00:30:28.376 11:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:30:28.376 11:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:28.376 11:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:28.376 11:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:28.376 11:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.322 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.580 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.580 "name": "raid_bdev1", 00:30:29.580 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:29.580 "strip_size_kb": 64, 00:30:29.580 "state": "online", 00:30:29.580 "raid_level": "raid5f", 00:30:29.580 "superblock": true, 00:30:29.580 "num_base_bdevs": 4, 00:30:29.580 "num_base_bdevs_discovered": 3, 00:30:29.580 "num_base_bdevs_operational": 3, 00:30:29.580 "base_bdevs_list": [ 00:30:29.580 { 00:30:29.580 "name": null, 00:30:29.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.581 "is_configured": false, 00:30:29.581 "data_offset": 2048, 00:30:29.581 "data_size": 63488 00:30:29.581 }, 00:30:29.581 { 00:30:29.581 "name": "BaseBdev2", 00:30:29.581 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:29.581 "is_configured": true, 
00:30:29.581 "data_offset": 2048, 00:30:29.581 "data_size": 63488 00:30:29.581 }, 00:30:29.581 { 00:30:29.581 "name": "BaseBdev3", 00:30:29.581 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:29.581 "is_configured": true, 00:30:29.581 "data_offset": 2048, 00:30:29.581 "data_size": 63488 00:30:29.581 }, 00:30:29.581 { 00:30:29.581 "name": "BaseBdev4", 00:30:29.581 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:29.581 "is_configured": true, 00:30:29.581 "data_offset": 2048, 00:30:29.581 "data_size": 63488 00:30:29.581 } 00:30:29.581 ] 00:30:29.581 }' 00:30:29.581 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.581 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.147 11:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.405 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.405 "name": "raid_bdev1", 00:30:30.405 "uuid": "c203fc7c-260c-4c36-928a-229586523172", 00:30:30.405 "strip_size_kb": 64, 00:30:30.405 "state": "online", 00:30:30.405 "raid_level": "raid5f", 00:30:30.405 "superblock": true, 00:30:30.405 "num_base_bdevs": 4, 00:30:30.405 "num_base_bdevs_discovered": 3, 00:30:30.405 "num_base_bdevs_operational": 3, 00:30:30.405 "base_bdevs_list": [ 00:30:30.405 { 00:30:30.405 "name": null, 00:30:30.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.405 "is_configured": false, 00:30:30.405 "data_offset": 2048, 00:30:30.405 "data_size": 63488 00:30:30.405 }, 00:30:30.405 { 00:30:30.405 "name": "BaseBdev2", 00:30:30.405 "uuid": "698ebdc0-68d0-5baa-87db-aea89dccfdfc", 00:30:30.405 "is_configured": true, 00:30:30.405 "data_offset": 2048, 00:30:30.405 "data_size": 63488 00:30:30.405 }, 00:30:30.405 { 00:30:30.405 "name": "BaseBdev3", 00:30:30.405 "uuid": "b218a05b-abc7-5cee-9408-2adfbd388a1c", 00:30:30.405 "is_configured": true, 00:30:30.405 "data_offset": 2048, 00:30:30.405 "data_size": 63488 00:30:30.405 }, 00:30:30.405 { 00:30:30.405 "name": "BaseBdev4", 00:30:30.405 "uuid": "d0d0f695-7278-5e29-832f-1e9d572e6b09", 00:30:30.405 "is_configured": true, 00:30:30.405 "data_offset": 2048, 00:30:30.405 "data_size": 63488 00:30:30.405 } 00:30:30.405 ] 00:30:30.405 }' 00:30:30.405 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.405 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:30.405 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:30.664 11:41:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 99000 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 99000 ']' 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 99000 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99000 00:30:30.664 killing process with pid 99000 00:30:30.664 Received shutdown signal, test time was about 60.000000 seconds 00:30:30.664 00:30:30.664 Latency(us) 00:30:30.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.664 =================================================================================================================== 00:30:30.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99000' 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 99000 00:30:30.664 11:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 99000 00:30:30.664 [2024-07-25 11:41:46.348117] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:30.664 [2024-07-25 11:41:46.348306] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.664 [2024-07-25 11:41:46.348403] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:30.664 [2024-07-25 11:41:46.348423] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:30.922 [2024-07-25 11:41:46.789474] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:32.296 11:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:30:32.296 00:30:32.296 real 0m42.463s 00:30:32.296 user 1m5.124s 00:30:32.296 sys 0m4.756s 00:30:32.296 11:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:32.296 11:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.296 ************************************ 00:30:32.296 END TEST raid5f_rebuild_test_sb 00:30:32.296 ************************************ 00:30:32.296 11:41:47 bdev_raid -- bdev/bdev_raid.sh@974 -- # base_blocklen=4096 00:30:32.296 11:41:47 bdev_raid -- bdev/bdev_raid.sh@976 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:30:32.296 11:41:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:32.296 11:41:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:32.296 11:41:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:32.296 ************************************ 00:30:32.296 START TEST raid_state_function_test_sb_4k 00:30:32.296 ************************************ 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # 
raid_state_function_test raid1 2 true 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:32.296 Process raid pid: 99957 00:30:32.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
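A minimal sketch (not part of the captured output) of the RPC sequence this raid_state_function_test run exercises once bdev_svc is listening on /var/tmp/spdk-raid.sock; it is condensed relative to the actual test order and uses only the rpc.py path, socket path, RPC calls, and jq filter that appear verbatim elsewhere in this trace:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # two 32 MiB malloc base bdevs with a 4096-byte block size, as the test creates further down in this trace
  $rpc -s $sock bdev_malloc_create 32 4096 -b BaseBdev1
  $rpc -s $sock bdev_malloc_create 32 4096 -b BaseBdev2
  # assemble the superblock-enabled raid1 volume under test
  $rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # read back the raid state the same way the verify helpers below query it
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'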
00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=99957 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 99957' 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 99957 /var/tmp/spdk-raid.sock 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 99957 ']' 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.296 11:41:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:32.296 [2024-07-25 11:41:48.121478] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:30:32.296 [2024-07-25 11:41:48.122022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.554 [2024-07-25 11:41:48.304566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.813 [2024-07-25 11:41:48.569805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.071 [2024-07-25 11:41:48.765947] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.071 [2024-07-25 11:41:48.766002] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.330 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.330 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:30:33.330 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:33.588 [2024-07-25 11:41:49.351414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:33.588 [2024-07-25 11:41:49.351815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:33.588 [2024-07-25 11:41:49.351863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:33.588 [2024-07-25 11:41:49.351892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.588 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.846 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:33.846 "name": "Existed_Raid", 00:30:33.846 "uuid": "ecc8768a-2059-40fa-83fa-81efa18ac2c2", 00:30:33.846 "strip_size_kb": 0, 00:30:33.846 "state": "configuring", 00:30:33.846 "raid_level": "raid1", 00:30:33.846 "superblock": true, 00:30:33.846 "num_base_bdevs": 2, 00:30:33.846 "num_base_bdevs_discovered": 0, 00:30:33.846 "num_base_bdevs_operational": 2, 00:30:33.846 "base_bdevs_list": [ 00:30:33.846 { 00:30:33.846 "name": "BaseBdev1", 00:30:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.846 "is_configured": false, 00:30:33.846 "data_offset": 0, 00:30:33.846 "data_size": 0 00:30:33.846 }, 00:30:33.846 { 00:30:33.846 "name": "BaseBdev2", 00:30:33.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.846 "is_configured": false, 00:30:33.846 "data_offset": 0, 00:30:33.846 "data_size": 0 00:30:33.846 } 00:30:33.846 ] 00:30:33.846 }' 00:30:33.846 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:33.846 11:41:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:34.780 11:41:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:34.780 [2024-07-25 11:41:50.603598] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.780 [2024-07-25 11:41:50.603643] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:34.780 11:41:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:35.038 [2024-07-25 11:41:50.899862] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:35.038 [2024-07-25 11:41:50.899928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:35.038 [2024-07-25 11:41:50.899954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:35.038 [2024-07-25 11:41:50.899969] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:35.038 11:41:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:30:35.605 [2024-07-25 11:41:51.217288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:35.605 BaseBdev1 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:35.605 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:35.864 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:36.123 [ 00:30:36.123 { 00:30:36.123 "name": "BaseBdev1", 00:30:36.123 "aliases": [ 00:30:36.123 "4f486966-04ac-4a16-8f83-3e2f4f183715" 00:30:36.123 ], 00:30:36.123 "product_name": "Malloc disk", 00:30:36.123 "block_size": 4096, 00:30:36.123 "num_blocks": 8192, 00:30:36.123 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:36.123 "assigned_rate_limits": { 00:30:36.123 "rw_ios_per_sec": 0, 00:30:36.123 "rw_mbytes_per_sec": 0, 00:30:36.123 "r_mbytes_per_sec": 0, 00:30:36.123 "w_mbytes_per_sec": 0 00:30:36.123 }, 00:30:36.123 "claimed": true, 00:30:36.123 "claim_type": "exclusive_write", 00:30:36.123 "zoned": false, 00:30:36.123 "supported_io_types": { 00:30:36.123 "read": true, 00:30:36.123 "write": true, 00:30:36.123 "unmap": true, 00:30:36.123 "flush": true, 00:30:36.123 "reset": true, 00:30:36.123 "nvme_admin": false, 00:30:36.123 "nvme_io": false, 00:30:36.123 "nvme_io_md": false, 00:30:36.123 "write_zeroes": true, 00:30:36.123 "zcopy": true, 00:30:36.123 "get_zone_info": false, 00:30:36.123 "zone_management": false, 00:30:36.123 "zone_append": false, 00:30:36.123 "compare": false, 00:30:36.123 "compare_and_write": false, 00:30:36.123 "abort": true, 00:30:36.123 "seek_hole": false, 00:30:36.123 "seek_data": false, 00:30:36.123 "copy": true, 00:30:36.123 "nvme_iov_md": false 00:30:36.123 }, 00:30:36.123 "memory_domains": [ 00:30:36.123 { 00:30:36.123 "dma_device_id": "system", 00:30:36.123 "dma_device_type": 1 00:30:36.123 }, 00:30:36.123 { 00:30:36.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.123 "dma_device_type": 2 00:30:36.123 } 00:30:36.123 ], 00:30:36.123 "driver_specific": {} 00:30:36.123 } 00:30:36.123 ] 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.123 11:41:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.382 11:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.382 "name": "Existed_Raid", 00:30:36.382 "uuid": "633782f9-7068-45c4-b97e-51e86a3e0646", 00:30:36.382 "strip_size_kb": 0, 00:30:36.382 "state": "configuring", 00:30:36.382 "raid_level": "raid1", 00:30:36.382 "superblock": true, 00:30:36.382 "num_base_bdevs": 2, 00:30:36.382 "num_base_bdevs_discovered": 1, 00:30:36.382 "num_base_bdevs_operational": 2, 00:30:36.382 "base_bdevs_list": [ 00:30:36.382 { 00:30:36.382 "name": "BaseBdev1", 00:30:36.382 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:36.382 "is_configured": true, 00:30:36.382 "data_offset": 256, 00:30:36.382 "data_size": 7936 00:30:36.382 }, 00:30:36.382 { 00:30:36.382 "name": "BaseBdev2", 00:30:36.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.382 "is_configured": false, 00:30:36.382 "data_offset": 0, 00:30:36.382 "data_size": 0 00:30:36.382 } 00:30:36.382 ] 00:30:36.382 }' 00:30:36.382 11:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.382 11:41:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:36.949 11:41:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:37.206 [2024-07-25 11:41:53.002159] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:37.206 [2024-07-25 11:41:53.002492] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:37.206 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:37.463 [2024-07-25 11:41:53.290337] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.463 [2024-07-25 11:41:53.292913] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:37.463 [2024-07-25 11:41:53.292968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.463 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.028 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.028 "name": "Existed_Raid", 00:30:38.028 "uuid": "01932d32-48be-44f5-aff5-bc86de183a51", 00:30:38.028 "strip_size_kb": 0, 00:30:38.028 "state": "configuring", 00:30:38.028 "raid_level": "raid1", 00:30:38.028 "superblock": true, 00:30:38.028 "num_base_bdevs": 2, 00:30:38.028 "num_base_bdevs_discovered": 1, 00:30:38.028 "num_base_bdevs_operational": 2, 00:30:38.028 "base_bdevs_list": [ 00:30:38.028 { 00:30:38.028 "name": "BaseBdev1", 00:30:38.028 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:38.028 "is_configured": true, 00:30:38.028 "data_offset": 256, 00:30:38.028 "data_size": 7936 00:30:38.028 }, 00:30:38.028 { 00:30:38.028 "name": "BaseBdev2", 00:30:38.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.028 "is_configured": false, 00:30:38.028 "data_offset": 0, 00:30:38.028 "data_size": 0 00:30:38.028 } 00:30:38.028 ] 00:30:38.028 }' 00:30:38.028 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.028 11:41:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:38.593 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:30:38.851 [2024-07-25 11:41:54.529425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:38.851 [2024-07-25 11:41:54.529818] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:38.851 [2024-07-25 11:41:54.529859] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 
7936, blocklen 4096 00:30:38.851 [2024-07-25 11:41:54.530194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:38.851 [2024-07-25 11:41:54.530408] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:38.851 [2024-07-25 11:41:54.530425] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:38.851 [2024-07-25 11:41:54.530656] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:38.851 BaseBdev2 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:38.851 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:39.110 11:41:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:39.424 [ 00:30:39.424 { 00:30:39.424 "name": "BaseBdev2", 00:30:39.424 "aliases": [ 00:30:39.424 "fbca8416-37c2-423c-86ba-bde44ddb4c54" 00:30:39.424 ], 00:30:39.424 "product_name": "Malloc disk", 00:30:39.424 "block_size": 4096, 00:30:39.424 "num_blocks": 8192, 00:30:39.424 "uuid": "fbca8416-37c2-423c-86ba-bde44ddb4c54", 00:30:39.424 "assigned_rate_limits": { 00:30:39.424 "rw_ios_per_sec": 0, 00:30:39.424 "rw_mbytes_per_sec": 0, 00:30:39.424 "r_mbytes_per_sec": 0, 00:30:39.424 "w_mbytes_per_sec": 0 00:30:39.424 }, 00:30:39.424 "claimed": true, 00:30:39.424 "claim_type": "exclusive_write", 00:30:39.424 "zoned": false, 00:30:39.424 "supported_io_types": { 00:30:39.424 "read": true, 00:30:39.424 "write": true, 00:30:39.424 "unmap": true, 00:30:39.424 "flush": true, 00:30:39.424 "reset": true, 00:30:39.424 "nvme_admin": false, 00:30:39.424 "nvme_io": false, 00:30:39.424 "nvme_io_md": false, 00:30:39.424 "write_zeroes": true, 00:30:39.424 "zcopy": true, 00:30:39.424 "get_zone_info": false, 00:30:39.424 "zone_management": false, 00:30:39.424 "zone_append": false, 00:30:39.424 "compare": false, 00:30:39.424 "compare_and_write": false, 00:30:39.424 "abort": true, 00:30:39.424 "seek_hole": false, 00:30:39.424 "seek_data": false, 00:30:39.424 "copy": true, 00:30:39.424 "nvme_iov_md": false 00:30:39.424 }, 00:30:39.424 "memory_domains": [ 00:30:39.424 { 00:30:39.424 "dma_device_id": "system", 00:30:39.424 "dma_device_type": 1 00:30:39.424 }, 00:30:39.424 { 00:30:39.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.424 "dma_device_type": 2 00:30:39.424 } 00:30:39.424 ], 00:30:39.424 "driver_specific": {} 00:30:39.424 } 00:30:39.424 ] 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
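A hedged equivalent of the online-state check that verify_raid_bdev_state performs next (the script's own jq filters are not shown in this trace; the commands and JSON field names below are taken from the bdev_raid_get_bdevs output visible above, and the variable names are illustrative):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(echo "$info" | jq -r '.state')                          # expected "online" once both base bdevs are claimed
  level=$(echo "$info" | jq -r '.raid_level')                     # expected "raid1"
  discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered') # expected 2
  [[ $state == online && $level == raid1 && $discovered -eq 2 ]] || echo "unexpected Existed_Raid state: $info"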
00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.424 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.683 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.683 "name": "Existed_Raid", 00:30:39.683 "uuid": "01932d32-48be-44f5-aff5-bc86de183a51", 00:30:39.683 "strip_size_kb": 0, 00:30:39.683 "state": "online", 00:30:39.683 "raid_level": "raid1", 00:30:39.683 "superblock": true, 00:30:39.683 "num_base_bdevs": 2, 00:30:39.683 "num_base_bdevs_discovered": 2, 00:30:39.683 "num_base_bdevs_operational": 2, 00:30:39.683 "base_bdevs_list": [ 00:30:39.683 { 00:30:39.683 "name": "BaseBdev1", 00:30:39.683 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:39.683 "is_configured": true, 00:30:39.683 "data_offset": 256, 00:30:39.683 "data_size": 7936 00:30:39.683 }, 00:30:39.683 { 00:30:39.683 "name": "BaseBdev2", 00:30:39.683 "uuid": "fbca8416-37c2-423c-86ba-bde44ddb4c54", 00:30:39.683 "is_configured": true, 00:30:39.683 "data_offset": 256, 00:30:39.683 "data_size": 7936 00:30:39.683 } 00:30:39.683 ] 00:30:39.683 }' 00:30:39.683 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.683 11:41:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local 
name 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:40.250 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:40.508 [2024-07-25 11:41:56.258410] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:40.508 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:40.508 "name": "Existed_Raid", 00:30:40.508 "aliases": [ 00:30:40.508 "01932d32-48be-44f5-aff5-bc86de183a51" 00:30:40.508 ], 00:30:40.508 "product_name": "Raid Volume", 00:30:40.508 "block_size": 4096, 00:30:40.508 "num_blocks": 7936, 00:30:40.508 "uuid": "01932d32-48be-44f5-aff5-bc86de183a51", 00:30:40.508 "assigned_rate_limits": { 00:30:40.508 "rw_ios_per_sec": 0, 00:30:40.508 "rw_mbytes_per_sec": 0, 00:30:40.508 "r_mbytes_per_sec": 0, 00:30:40.508 "w_mbytes_per_sec": 0 00:30:40.508 }, 00:30:40.508 "claimed": false, 00:30:40.508 "zoned": false, 00:30:40.508 "supported_io_types": { 00:30:40.508 "read": true, 00:30:40.508 "write": true, 00:30:40.508 "unmap": false, 00:30:40.508 "flush": false, 00:30:40.508 "reset": true, 00:30:40.508 "nvme_admin": false, 00:30:40.508 "nvme_io": false, 00:30:40.508 "nvme_io_md": false, 00:30:40.508 "write_zeroes": true, 00:30:40.508 "zcopy": false, 00:30:40.508 "get_zone_info": false, 00:30:40.508 "zone_management": false, 00:30:40.508 "zone_append": false, 00:30:40.508 "compare": false, 00:30:40.508 "compare_and_write": false, 00:30:40.508 "abort": false, 00:30:40.508 "seek_hole": false, 00:30:40.508 "seek_data": false, 00:30:40.508 "copy": false, 00:30:40.508 "nvme_iov_md": false 00:30:40.508 }, 00:30:40.508 "memory_domains": [ 00:30:40.508 { 00:30:40.508 "dma_device_id": "system", 00:30:40.508 "dma_device_type": 1 00:30:40.508 }, 00:30:40.508 { 00:30:40.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.508 "dma_device_type": 2 00:30:40.508 }, 00:30:40.508 { 00:30:40.508 "dma_device_id": "system", 00:30:40.508 "dma_device_type": 1 00:30:40.508 }, 00:30:40.508 { 00:30:40.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.508 "dma_device_type": 2 00:30:40.508 } 00:30:40.508 ], 00:30:40.508 "driver_specific": { 00:30:40.508 "raid": { 00:30:40.508 "uuid": "01932d32-48be-44f5-aff5-bc86de183a51", 00:30:40.508 "strip_size_kb": 0, 00:30:40.508 "state": "online", 00:30:40.508 "raid_level": "raid1", 00:30:40.508 "superblock": true, 00:30:40.508 "num_base_bdevs": 2, 00:30:40.508 "num_base_bdevs_discovered": 2, 00:30:40.508 "num_base_bdevs_operational": 2, 00:30:40.508 "base_bdevs_list": [ 00:30:40.508 { 00:30:40.508 "name": "BaseBdev1", 00:30:40.508 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:40.508 "is_configured": true, 00:30:40.508 "data_offset": 256, 00:30:40.508 "data_size": 7936 00:30:40.508 }, 00:30:40.508 { 00:30:40.508 "name": "BaseBdev2", 00:30:40.508 "uuid": "fbca8416-37c2-423c-86ba-bde44ddb4c54", 00:30:40.508 "is_configured": true, 00:30:40.508 "data_offset": 256, 00:30:40.508 "data_size": 7936 00:30:40.508 } 00:30:40.508 ] 00:30:40.508 } 00:30:40.508 } 00:30:40.508 }' 00:30:40.508 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:40.508 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:40.508 BaseBdev2' 00:30:40.508 
11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:40.508 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:40.508 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:40.767 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:40.767 "name": "BaseBdev1", 00:30:40.767 "aliases": [ 00:30:40.767 "4f486966-04ac-4a16-8f83-3e2f4f183715" 00:30:40.767 ], 00:30:40.767 "product_name": "Malloc disk", 00:30:40.767 "block_size": 4096, 00:30:40.767 "num_blocks": 8192, 00:30:40.767 "uuid": "4f486966-04ac-4a16-8f83-3e2f4f183715", 00:30:40.767 "assigned_rate_limits": { 00:30:40.767 "rw_ios_per_sec": 0, 00:30:40.767 "rw_mbytes_per_sec": 0, 00:30:40.767 "r_mbytes_per_sec": 0, 00:30:40.767 "w_mbytes_per_sec": 0 00:30:40.767 }, 00:30:40.767 "claimed": true, 00:30:40.767 "claim_type": "exclusive_write", 00:30:40.767 "zoned": false, 00:30:40.767 "supported_io_types": { 00:30:40.767 "read": true, 00:30:40.767 "write": true, 00:30:40.767 "unmap": true, 00:30:40.767 "flush": true, 00:30:40.767 "reset": true, 00:30:40.767 "nvme_admin": false, 00:30:40.767 "nvme_io": false, 00:30:40.767 "nvme_io_md": false, 00:30:40.767 "write_zeroes": true, 00:30:40.767 "zcopy": true, 00:30:40.767 "get_zone_info": false, 00:30:40.767 "zone_management": false, 00:30:40.767 "zone_append": false, 00:30:40.767 "compare": false, 00:30:40.767 "compare_and_write": false, 00:30:40.767 "abort": true, 00:30:40.767 "seek_hole": false, 00:30:40.767 "seek_data": false, 00:30:40.767 "copy": true, 00:30:40.767 "nvme_iov_md": false 00:30:40.767 }, 00:30:40.767 "memory_domains": [ 00:30:40.767 { 00:30:40.767 "dma_device_id": "system", 00:30:40.767 "dma_device_type": 1 00:30:40.767 }, 00:30:40.767 { 00:30:40.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.767 "dma_device_type": 2 00:30:40.767 } 00:30:40.767 ], 00:30:40.767 "driver_specific": {} 00:30:40.767 }' 00:30:40.767 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:41.026 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:41.285 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:41.285 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:41.285 11:41:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:41.285 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:41.285 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:30:41.285 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:41.285 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:41.544 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:41.544 "name": "BaseBdev2", 00:30:41.544 "aliases": [ 00:30:41.544 "fbca8416-37c2-423c-86ba-bde44ddb4c54" 00:30:41.544 ], 00:30:41.544 "product_name": "Malloc disk", 00:30:41.544 "block_size": 4096, 00:30:41.544 "num_blocks": 8192, 00:30:41.544 "uuid": "fbca8416-37c2-423c-86ba-bde44ddb4c54", 00:30:41.544 "assigned_rate_limits": { 00:30:41.544 "rw_ios_per_sec": 0, 00:30:41.544 "rw_mbytes_per_sec": 0, 00:30:41.544 "r_mbytes_per_sec": 0, 00:30:41.544 "w_mbytes_per_sec": 0 00:30:41.544 }, 00:30:41.544 "claimed": true, 00:30:41.544 "claim_type": "exclusive_write", 00:30:41.544 "zoned": false, 00:30:41.544 "supported_io_types": { 00:30:41.544 "read": true, 00:30:41.544 "write": true, 00:30:41.544 "unmap": true, 00:30:41.544 "flush": true, 00:30:41.544 "reset": true, 00:30:41.544 "nvme_admin": false, 00:30:41.544 "nvme_io": false, 00:30:41.544 "nvme_io_md": false, 00:30:41.544 "write_zeroes": true, 00:30:41.544 "zcopy": true, 00:30:41.544 "get_zone_info": false, 00:30:41.544 "zone_management": false, 00:30:41.544 "zone_append": false, 00:30:41.544 "compare": false, 00:30:41.544 "compare_and_write": false, 00:30:41.544 "abort": true, 00:30:41.544 "seek_hole": false, 00:30:41.544 "seek_data": false, 00:30:41.544 "copy": true, 00:30:41.544 "nvme_iov_md": false 00:30:41.544 }, 00:30:41.544 "memory_domains": [ 00:30:41.544 { 00:30:41.544 "dma_device_id": "system", 00:30:41.544 "dma_device_type": 1 00:30:41.544 }, 00:30:41.544 { 00:30:41.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.544 "dma_device_type": 2 00:30:41.544 } 00:30:41.544 ], 00:30:41.544 "driver_specific": {} 00:30:41.544 }' 00:30:41.544 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:41.544 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:41.544 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:41.544 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:41.850 11:41:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:42.109 
[2024-07-25 11:41:57.966867] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.368 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.627 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.627 "name": "Existed_Raid", 00:30:42.627 "uuid": "01932d32-48be-44f5-aff5-bc86de183a51", 00:30:42.627 "strip_size_kb": 0, 00:30:42.627 "state": "online", 00:30:42.627 "raid_level": "raid1", 00:30:42.627 "superblock": true, 00:30:42.627 "num_base_bdevs": 2, 00:30:42.627 "num_base_bdevs_discovered": 1, 00:30:42.627 "num_base_bdevs_operational": 1, 00:30:42.627 "base_bdevs_list": [ 00:30:42.627 { 00:30:42.627 "name": null, 00:30:42.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.627 "is_configured": false, 00:30:42.627 "data_offset": 256, 00:30:42.627 "data_size": 7936 00:30:42.627 }, 00:30:42.627 { 00:30:42.627 "name": "BaseBdev2", 00:30:42.627 "uuid": "fbca8416-37c2-423c-86ba-bde44ddb4c54", 00:30:42.627 "is_configured": true, 00:30:42.627 "data_offset": 256, 00:30:42.627 "data_size": 7936 00:30:42.627 } 00:30:42.627 ] 00:30:42.627 }' 00:30:42.627 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.627 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:43.193 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:43.193 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:43.193 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.193 11:41:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:43.455 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:43.455 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:43.455 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:43.714 [2024-07-25 11:41:59.470608] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:43.714 [2024-07-25 11:41:59.470793] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:43.714 [2024-07-25 11:41:59.553118] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:43.714 [2024-07-25 11:41:59.553215] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:43.714 [2024-07-25 11:41:59.553231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:43.714 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:43.714 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:43.714 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.714 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 99957 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 99957 ']' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 99957 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99957 00:30:43.973 killing process with pid 99957 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99957' 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 
-- # kill 99957 00:30:43.973 [2024-07-25 11:41:59.849197] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:43.973 11:41:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 99957 00:30:44.232 [2024-07-25 11:41:59.865097] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:45.605 ************************************ 00:30:45.605 END TEST raid_state_function_test_sb_4k 00:30:45.605 ************************************ 00:30:45.605 11:42:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:30:45.605 00:30:45.605 real 0m13.034s 00:30:45.605 user 0m22.666s 00:30:45.605 sys 0m1.768s 00:30:45.605 11:42:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.605 11:42:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:45.605 11:42:01 bdev_raid -- bdev/bdev_raid.sh@977 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:30:45.605 11:42:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:45.605 11:42:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.605 11:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:45.605 ************************************ 00:30:45.605 START TEST raid_superblock_test_4k 00:30:45.605 ************************************ 00:30:45.605 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=100320 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # 
waitforlisten 100320 /var/tmp/spdk-raid.sock 00:30:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 100320 ']' 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.606 11:42:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:45.606 [2024-07-25 11:42:01.212812] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:30:45.606 [2024-07-25 11:42:01.213018] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100320 ] 00:30:45.606 [2024-07-25 11:42:01.375447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.864 [2024-07-25 11:42:01.612648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.122 [2024-07-25 11:42:01.820741] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:46.122 [2024-07-25 11:42:01.820795] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:46.381 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:30:46.639 malloc1 00:30:46.639 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:46.897 [2024-07-25 11:42:02.574636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:46.897 [2024-07-25 11:42:02.574781] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:46.897 [2024-07-25 11:42:02.574815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:46.897 [2024-07-25 11:42:02.574834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:46.897 [2024-07-25 11:42:02.577806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:46.897 [2024-07-25 11:42:02.577859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:46.897 pt1 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:46.897 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:30:47.155 malloc2 00:30:47.155 11:42:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:47.413 [2024-07-25 11:42:03.111140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:47.413 [2024-07-25 11:42:03.111258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.413 [2024-07-25 11:42:03.111291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:47.414 [2024-07-25 11:42:03.111313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.414 [2024-07-25 11:42:03.114180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.414 [2024-07-25 11:42:03.114248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:47.414 pt2 00:30:47.414 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:30:47.414 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:30:47.414 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:30:47.673 [2024-07-25 11:42:03.371313] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:47.674 [2024-07-25 11:42:03.373779] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:47.674 [2024-07-25 11:42:03.374018] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:47.674 [2024-07-25 11:42:03.374042] bdev_raid.c:1722:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 7936, blocklen 4096 00:30:47.674 [2024-07-25 11:42:03.374362] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:47.674 [2024-07-25 11:42:03.374582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:47.674 [2024-07-25 11:42:03.374598] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:47.674 [2024-07-25 11:42:03.374828] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.674 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.931 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:47.931 "name": "raid_bdev1", 00:30:47.931 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:47.931 "strip_size_kb": 0, 00:30:47.931 "state": "online", 00:30:47.931 "raid_level": "raid1", 00:30:47.931 "superblock": true, 00:30:47.931 "num_base_bdevs": 2, 00:30:47.931 "num_base_bdevs_discovered": 2, 00:30:47.931 "num_base_bdevs_operational": 2, 00:30:47.931 "base_bdevs_list": [ 00:30:47.931 { 00:30:47.931 "name": "pt1", 00:30:47.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:47.931 "is_configured": true, 00:30:47.931 "data_offset": 256, 00:30:47.931 "data_size": 7936 00:30:47.931 }, 00:30:47.931 { 00:30:47.931 "name": "pt2", 00:30:47.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:47.931 "is_configured": true, 00:30:47.931 "data_offset": 256, 00:30:47.931 "data_size": 7936 00:30:47.931 } 00:30:47.931 ] 00:30:47.931 }' 00:30:47.931 11:42:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:47.931 11:42:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:48.498 11:42:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:48.498 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:48.756 [2024-07-25 11:42:04.555979] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:48.756 "name": "raid_bdev1", 00:30:48.756 "aliases": [ 00:30:48.756 "74909914-bd33-4b26-98e8-73310b1f0732" 00:30:48.756 ], 00:30:48.756 "product_name": "Raid Volume", 00:30:48.756 "block_size": 4096, 00:30:48.756 "num_blocks": 7936, 00:30:48.756 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:48.756 "assigned_rate_limits": { 00:30:48.756 "rw_ios_per_sec": 0, 00:30:48.756 "rw_mbytes_per_sec": 0, 00:30:48.756 "r_mbytes_per_sec": 0, 00:30:48.756 "w_mbytes_per_sec": 0 00:30:48.756 }, 00:30:48.756 "claimed": false, 00:30:48.756 "zoned": false, 00:30:48.756 "supported_io_types": { 00:30:48.756 "read": true, 00:30:48.756 "write": true, 00:30:48.756 "unmap": false, 00:30:48.756 "flush": false, 00:30:48.756 "reset": true, 00:30:48.756 "nvme_admin": false, 00:30:48.756 "nvme_io": false, 00:30:48.756 "nvme_io_md": false, 00:30:48.756 "write_zeroes": true, 00:30:48.756 "zcopy": false, 00:30:48.756 "get_zone_info": false, 00:30:48.756 "zone_management": false, 00:30:48.756 "zone_append": false, 00:30:48.756 "compare": false, 00:30:48.756 "compare_and_write": false, 00:30:48.756 "abort": false, 00:30:48.756 "seek_hole": false, 00:30:48.756 "seek_data": false, 00:30:48.756 "copy": false, 00:30:48.756 "nvme_iov_md": false 00:30:48.756 }, 00:30:48.756 "memory_domains": [ 00:30:48.756 { 00:30:48.756 "dma_device_id": "system", 00:30:48.756 "dma_device_type": 1 00:30:48.756 }, 00:30:48.756 { 00:30:48.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.756 "dma_device_type": 2 00:30:48.756 }, 00:30:48.756 { 00:30:48.756 "dma_device_id": "system", 00:30:48.756 "dma_device_type": 1 00:30:48.756 }, 00:30:48.756 { 00:30:48.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.756 "dma_device_type": 2 00:30:48.756 } 00:30:48.756 ], 00:30:48.756 "driver_specific": { 00:30:48.756 "raid": { 00:30:48.756 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:48.756 "strip_size_kb": 0, 00:30:48.756 "state": "online", 00:30:48.756 "raid_level": "raid1", 00:30:48.756 "superblock": true, 00:30:48.756 "num_base_bdevs": 2, 00:30:48.756 "num_base_bdevs_discovered": 2, 00:30:48.756 "num_base_bdevs_operational": 2, 00:30:48.756 "base_bdevs_list": [ 00:30:48.756 { 00:30:48.756 "name": "pt1", 00:30:48.756 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:48.756 "is_configured": true, 00:30:48.756 "data_offset": 256, 00:30:48.756 "data_size": 7936 00:30:48.756 }, 00:30:48.756 { 00:30:48.756 "name": "pt2", 00:30:48.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:48.756 "is_configured": true, 00:30:48.756 "data_offset": 256, 00:30:48.756 "data_size": 7936 00:30:48.756 } 00:30:48.756 ] 00:30:48.756 } 00:30:48.756 } 00:30:48.756 }' 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:48.756 pt2' 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:48.756 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:49.015 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:49.015 "name": "pt1", 00:30:49.015 "aliases": [ 00:30:49.015 "00000000-0000-0000-0000-000000000001" 00:30:49.015 ], 00:30:49.015 "product_name": "passthru", 00:30:49.015 "block_size": 4096, 00:30:49.015 "num_blocks": 8192, 00:30:49.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:49.015 "assigned_rate_limits": { 00:30:49.015 "rw_ios_per_sec": 0, 00:30:49.015 "rw_mbytes_per_sec": 0, 00:30:49.015 "r_mbytes_per_sec": 0, 00:30:49.015 "w_mbytes_per_sec": 0 00:30:49.015 }, 00:30:49.015 "claimed": true, 00:30:49.015 "claim_type": "exclusive_write", 00:30:49.015 "zoned": false, 00:30:49.015 "supported_io_types": { 00:30:49.015 "read": true, 00:30:49.015 "write": true, 00:30:49.015 "unmap": true, 00:30:49.015 "flush": true, 00:30:49.015 "reset": true, 00:30:49.015 "nvme_admin": false, 00:30:49.015 "nvme_io": false, 00:30:49.015 "nvme_io_md": false, 00:30:49.015 "write_zeroes": true, 00:30:49.015 "zcopy": true, 00:30:49.015 "get_zone_info": false, 00:30:49.015 "zone_management": false, 00:30:49.016 "zone_append": false, 00:30:49.016 "compare": false, 00:30:49.016 "compare_and_write": false, 00:30:49.016 "abort": true, 00:30:49.016 "seek_hole": false, 00:30:49.016 "seek_data": false, 00:30:49.016 "copy": true, 00:30:49.016 "nvme_iov_md": false 00:30:49.016 }, 00:30:49.016 "memory_domains": [ 00:30:49.016 { 00:30:49.016 "dma_device_id": "system", 00:30:49.016 "dma_device_type": 1 00:30:49.016 }, 00:30:49.016 { 00:30:49.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.016 "dma_device_type": 2 00:30:49.016 } 00:30:49.016 ], 00:30:49.016 "driver_specific": { 00:30:49.016 "passthru": { 00:30:49.016 "name": "pt1", 00:30:49.016 "base_bdev_name": "malloc1" 00:30:49.016 } 00:30:49.016 } 00:30:49.016 }' 00:30:49.016 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:49.274 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:49.274 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:49.274 11:42:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:49.274 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:49.274 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:49.274 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:49.274 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:49.532 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:49.790 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:49.790 "name": "pt2", 00:30:49.790 "aliases": [ 00:30:49.790 "00000000-0000-0000-0000-000000000002" 00:30:49.790 ], 00:30:49.790 "product_name": "passthru", 00:30:49.790 "block_size": 4096, 00:30:49.790 "num_blocks": 8192, 00:30:49.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:49.790 "assigned_rate_limits": { 00:30:49.791 "rw_ios_per_sec": 0, 00:30:49.791 "rw_mbytes_per_sec": 0, 00:30:49.791 "r_mbytes_per_sec": 0, 00:30:49.791 "w_mbytes_per_sec": 0 00:30:49.791 }, 00:30:49.791 "claimed": true, 00:30:49.791 "claim_type": "exclusive_write", 00:30:49.791 "zoned": false, 00:30:49.791 "supported_io_types": { 00:30:49.791 "read": true, 00:30:49.791 "write": true, 00:30:49.791 "unmap": true, 00:30:49.791 "flush": true, 00:30:49.791 "reset": true, 00:30:49.791 "nvme_admin": false, 00:30:49.791 "nvme_io": false, 00:30:49.791 "nvme_io_md": false, 00:30:49.791 "write_zeroes": true, 00:30:49.791 "zcopy": true, 00:30:49.791 "get_zone_info": false, 00:30:49.791 "zone_management": false, 00:30:49.791 "zone_append": false, 00:30:49.791 "compare": false, 00:30:49.791 "compare_and_write": false, 00:30:49.791 "abort": true, 00:30:49.791 "seek_hole": false, 00:30:49.791 "seek_data": false, 00:30:49.791 "copy": true, 00:30:49.791 "nvme_iov_md": false 00:30:49.791 }, 00:30:49.791 "memory_domains": [ 00:30:49.791 { 00:30:49.791 "dma_device_id": "system", 00:30:49.791 "dma_device_type": 1 00:30:49.791 }, 00:30:49.791 { 00:30:49.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.791 "dma_device_type": 2 00:30:49.791 } 00:30:49.791 ], 00:30:49.791 "driver_specific": { 00:30:49.791 "passthru": { 00:30:49.791 "name": "pt2", 00:30:49.791 "base_bdev_name": "malloc2" 00:30:49.791 } 00:30:49.791 } 00:30:49.791 }' 00:30:49.791 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:49.791 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:49.791 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:49.791 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:49.791 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
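The block above is the verify_raid_bdev_properties step of the 4k superblock test: for every configured base bdev of raid_bdev1 it fetches the bdev JSON over the RPC socket and asserts that block_size is 4096 while md_size, md_interleave and dif_type are all null (plain 4k blocks, no metadata, no interleave, no DIF). A minimal stand-alone sketch of the same checks, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock and that raid_bdev1 with its pt1/pt2 base bdevs exists as in the log; the loop is illustrative, not the literal bdev_raid.sh code:

set -e  # abort on the first failed check, as the test harness does

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Names of all configured base bdevs of raid_bdev1 (same jq filter as bdev_raid.sh@200-201).
names=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 |
        jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

for name in $names; do
    info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
    # 4k flavour of the test: 4096-byte blocks, no metadata, no interleave, no DIF.
    [[ $(jq .block_size <<< "$info") == 4096 ]]
    [[ $(jq .md_size <<< "$info") == null ]]
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type <<< "$info") == null ]]
done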
00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:50.053 11:42:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:30:50.312 [2024-07-25 11:42:06.188561] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:50.570 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=74909914-bd33-4b26-98e8-73310b1f0732 00:30:50.570 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 74909914-bd33-4b26-98e8-73310b1f0732 ']' 00:30:50.570 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:50.570 [2024-07-25 11:42:06.428249] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:50.570 [2024-07-25 11:42:06.428296] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:50.570 [2024-07-25 11:42:06.428399] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:50.570 [2024-07-25 11:42:06.428486] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:50.570 [2024-07-25 11:42:06.428515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:50.570 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:30:50.570 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:30:51.137 11:42:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:51.395 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:30:51.395 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:51.654 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:30:51.912 [2024-07-25 11:42:07.692648] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:51.912 [2024-07-25 11:42:07.695361] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:51.912 [2024-07-25 11:42:07.695615] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:51.912 [2024-07-25 11:42:07.695846] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:51.912 [2024-07-25 11:42:07.696040] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:51.913 [2024-07-25 11:42:07.696209] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:30:51.913 request: 00:30:51.913 { 00:30:51.913 "name": "raid_bdev1", 00:30:51.913 "raid_level": "raid1", 00:30:51.913 "base_bdevs": [ 00:30:51.913 "malloc1", 00:30:51.913 "malloc2" 00:30:51.913 ], 00:30:51.913 "superblock": false, 00:30:51.913 "method": "bdev_raid_create", 00:30:51.913 "req_id": 1 00:30:51.913 } 00:30:51.913 Got JSON-RPC error response 00:30:51.913 response: 00:30:51.913 { 00:30:51.913 "code": -17, 00:30:51.913 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:51.913 } 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.913 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:30:52.171 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 
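The preceding sequence exercises superblock persistence: raid_bdev1 and both passthru bdevs are deleted, yet the raid superblock written to malloc1 and malloc2 when the array was created with -s survives, so a second bdev_raid_create aimed directly at the malloc bdevs is rejected with JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists") and no raid bdev is left behind. A condensed sketch of that negative check, reusing the socket, bdev names and jq filters from the log; the if/echo scaffolding is illustrative (the test itself wraps the call in the NOT helper from autotest_common.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Tear down the raid and its passthru bdevs; the superblock stays on malloc1/malloc2.
$rpc -s $sock bdev_raid_delete raid_bdev1
$rpc -s $sock bdev_passthru_delete pt1
$rpc -s $sock bdev_passthru_delete pt2

# Re-creating raid_bdev1 directly on the malloc bdevs must fail, because their
# superblocks already describe a raid built on pt1/pt2.
if $rpc -s $sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "unexpected: duplicate raid_bdev1 creation succeeded" >&2
    exit 1
fi

# The failed create must not leave a partially configured raid bdev around.
[[ -z $($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)') ]]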
00:30:52.171 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:30:52.171 11:42:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:52.430 [2024-07-25 11:42:08.216833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:52.430 [2024-07-25 11:42:08.216987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.430 [2024-07-25 11:42:08.217018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:52.430 [2024-07-25 11:42:08.217041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.430 [2024-07-25 11:42:08.219983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.430 [2024-07-25 11:42:08.220029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:52.430 [2024-07-25 11:42:08.220181] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:52.430 [2024-07-25 11:42:08.220257] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:52.430 pt1 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.430 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.688 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:52.688 "name": "raid_bdev1", 00:30:52.688 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:52.688 "strip_size_kb": 0, 00:30:52.688 "state": "configuring", 00:30:52.688 "raid_level": "raid1", 00:30:52.688 "superblock": true, 00:30:52.688 "num_base_bdevs": 2, 00:30:52.688 "num_base_bdevs_discovered": 1, 00:30:52.688 "num_base_bdevs_operational": 2, 00:30:52.688 "base_bdevs_list": [ 00:30:52.688 { 00:30:52.688 "name": "pt1", 00:30:52.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:52.688 "is_configured": true, 00:30:52.688 "data_offset": 256, 00:30:52.688 "data_size": 7936 00:30:52.688 }, 00:30:52.688 { 00:30:52.688 "name": null, 00:30:52.688 
"uuid": "00000000-0000-0000-0000-000000000002", 00:30:52.688 "is_configured": false, 00:30:52.688 "data_offset": 256, 00:30:52.688 "data_size": 7936 00:30:52.688 } 00:30:52.688 ] 00:30:52.688 }' 00:30:52.688 11:42:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:52.688 11:42:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:53.625 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:30:53.625 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:30:53.625 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:53.625 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:53.625 [2024-07-25 11:42:09.449141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:53.625 [2024-07-25 11:42:09.449510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.625 [2024-07-25 11:42:09.449595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:53.625 [2024-07-25 11:42:09.449849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.625 [2024-07-25 11:42:09.450495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.625 [2024-07-25 11:42:09.450545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:53.625 [2024-07-25 11:42:09.450682] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:53.625 [2024-07-25 11:42:09.450728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:53.625 [2024-07-25 11:42:09.450918] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:53.625 [2024-07-25 11:42:09.450951] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:53.625 [2024-07-25 11:42:09.451258] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:53.625 [2024-07-25 11:42:09.451464] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:53.625 [2024-07-25 11:42:09.451502] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:53.626 [2024-07-25 11:42:09.451704] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.626 pt2 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.626 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.884 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.884 "name": "raid_bdev1", 00:30:53.884 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:53.884 "strip_size_kb": 0, 00:30:53.884 "state": "online", 00:30:53.884 "raid_level": "raid1", 00:30:53.884 "superblock": true, 00:30:53.884 "num_base_bdevs": 2, 00:30:53.884 "num_base_bdevs_discovered": 2, 00:30:53.884 "num_base_bdevs_operational": 2, 00:30:53.884 "base_bdevs_list": [ 00:30:53.884 { 00:30:53.884 "name": "pt1", 00:30:53.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:53.884 "is_configured": true, 00:30:53.884 "data_offset": 256, 00:30:53.884 "data_size": 7936 00:30:53.884 }, 00:30:53.884 { 00:30:53.884 "name": "pt2", 00:30:53.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:53.884 "is_configured": true, 00:30:53.884 "data_offset": 256, 00:30:53.884 "data_size": 7936 00:30:53.884 } 00:30:53.884 ] 00:30:53.884 }' 00:30:53.884 11:42:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.884 11:42:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:54.819 [2024-07-25 11:42:10.565348] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:54.819 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:54.819 "name": "raid_bdev1", 00:30:54.819 "aliases": [ 00:30:54.819 "74909914-bd33-4b26-98e8-73310b1f0732" 00:30:54.819 ], 00:30:54.819 "product_name": "Raid Volume", 00:30:54.819 "block_size": 4096, 00:30:54.819 "num_blocks": 7936, 00:30:54.819 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:54.819 "assigned_rate_limits": { 00:30:54.819 "rw_ios_per_sec": 0, 00:30:54.819 "rw_mbytes_per_sec": 0, 
00:30:54.819 "r_mbytes_per_sec": 0, 00:30:54.819 "w_mbytes_per_sec": 0 00:30:54.819 }, 00:30:54.819 "claimed": false, 00:30:54.819 "zoned": false, 00:30:54.819 "supported_io_types": { 00:30:54.819 "read": true, 00:30:54.819 "write": true, 00:30:54.819 "unmap": false, 00:30:54.819 "flush": false, 00:30:54.819 "reset": true, 00:30:54.819 "nvme_admin": false, 00:30:54.819 "nvme_io": false, 00:30:54.819 "nvme_io_md": false, 00:30:54.819 "write_zeroes": true, 00:30:54.819 "zcopy": false, 00:30:54.819 "get_zone_info": false, 00:30:54.819 "zone_management": false, 00:30:54.819 "zone_append": false, 00:30:54.819 "compare": false, 00:30:54.819 "compare_and_write": false, 00:30:54.819 "abort": false, 00:30:54.819 "seek_hole": false, 00:30:54.819 "seek_data": false, 00:30:54.819 "copy": false, 00:30:54.819 "nvme_iov_md": false 00:30:54.819 }, 00:30:54.819 "memory_domains": [ 00:30:54.819 { 00:30:54.819 "dma_device_id": "system", 00:30:54.819 "dma_device_type": 1 00:30:54.819 }, 00:30:54.819 { 00:30:54.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.819 "dma_device_type": 2 00:30:54.819 }, 00:30:54.819 { 00:30:54.819 "dma_device_id": "system", 00:30:54.819 "dma_device_type": 1 00:30:54.819 }, 00:30:54.819 { 00:30:54.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:54.819 "dma_device_type": 2 00:30:54.819 } 00:30:54.819 ], 00:30:54.819 "driver_specific": { 00:30:54.819 "raid": { 00:30:54.819 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:54.819 "strip_size_kb": 0, 00:30:54.819 "state": "online", 00:30:54.819 "raid_level": "raid1", 00:30:54.819 "superblock": true, 00:30:54.819 "num_base_bdevs": 2, 00:30:54.819 "num_base_bdevs_discovered": 2, 00:30:54.819 "num_base_bdevs_operational": 2, 00:30:54.819 "base_bdevs_list": [ 00:30:54.819 { 00:30:54.820 "name": "pt1", 00:30:54.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:54.820 "is_configured": true, 00:30:54.820 "data_offset": 256, 00:30:54.820 "data_size": 7936 00:30:54.820 }, 00:30:54.820 { 00:30:54.820 "name": "pt2", 00:30:54.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:54.820 "is_configured": true, 00:30:54.820 "data_offset": 256, 00:30:54.820 "data_size": 7936 00:30:54.820 } 00:30:54.820 ] 00:30:54.820 } 00:30:54.820 } 00:30:54.820 }' 00:30:54.820 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:54.820 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:30:54.820 pt2' 00:30:54.820 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:54.820 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:30:54.820 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:55.078 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:55.078 "name": "pt1", 00:30:55.078 "aliases": [ 00:30:55.078 "00000000-0000-0000-0000-000000000001" 00:30:55.078 ], 00:30:55.078 "product_name": "passthru", 00:30:55.078 "block_size": 4096, 00:30:55.078 "num_blocks": 8192, 00:30:55.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:55.078 "assigned_rate_limits": { 00:30:55.078 "rw_ios_per_sec": 0, 00:30:55.078 "rw_mbytes_per_sec": 0, 00:30:55.078 "r_mbytes_per_sec": 0, 00:30:55.078 "w_mbytes_per_sec": 0 00:30:55.078 }, 00:30:55.078 "claimed": 
true, 00:30:55.078 "claim_type": "exclusive_write", 00:30:55.078 "zoned": false, 00:30:55.078 "supported_io_types": { 00:30:55.078 "read": true, 00:30:55.078 "write": true, 00:30:55.078 "unmap": true, 00:30:55.078 "flush": true, 00:30:55.078 "reset": true, 00:30:55.078 "nvme_admin": false, 00:30:55.078 "nvme_io": false, 00:30:55.078 "nvme_io_md": false, 00:30:55.078 "write_zeroes": true, 00:30:55.078 "zcopy": true, 00:30:55.078 "get_zone_info": false, 00:30:55.078 "zone_management": false, 00:30:55.078 "zone_append": false, 00:30:55.078 "compare": false, 00:30:55.078 "compare_and_write": false, 00:30:55.078 "abort": true, 00:30:55.078 "seek_hole": false, 00:30:55.078 "seek_data": false, 00:30:55.078 "copy": true, 00:30:55.078 "nvme_iov_md": false 00:30:55.078 }, 00:30:55.078 "memory_domains": [ 00:30:55.078 { 00:30:55.078 "dma_device_id": "system", 00:30:55.078 "dma_device_type": 1 00:30:55.078 }, 00:30:55.078 { 00:30:55.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.078 "dma_device_type": 2 00:30:55.078 } 00:30:55.078 ], 00:30:55.078 "driver_specific": { 00:30:55.078 "passthru": { 00:30:55.078 "name": "pt1", 00:30:55.078 "base_bdev_name": "malloc1" 00:30:55.078 } 00:30:55.078 } 00:30:55.078 }' 00:30:55.078 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:55.429 11:42:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:55.429 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:55.429 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:55.430 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:55.719 "name": "pt2", 00:30:55.719 "aliases": [ 00:30:55.719 "00000000-0000-0000-0000-000000000002" 00:30:55.719 ], 00:30:55.719 "product_name": "passthru", 00:30:55.719 "block_size": 4096, 00:30:55.719 "num_blocks": 8192, 00:30:55.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.719 "assigned_rate_limits": { 00:30:55.719 "rw_ios_per_sec": 0, 00:30:55.719 "rw_mbytes_per_sec": 0, 00:30:55.719 "r_mbytes_per_sec": 0, 00:30:55.719 "w_mbytes_per_sec": 0 00:30:55.719 }, 00:30:55.719 "claimed": true, 00:30:55.719 "claim_type": "exclusive_write", 00:30:55.719 "zoned": false, 00:30:55.719 
"supported_io_types": { 00:30:55.719 "read": true, 00:30:55.719 "write": true, 00:30:55.719 "unmap": true, 00:30:55.719 "flush": true, 00:30:55.719 "reset": true, 00:30:55.719 "nvme_admin": false, 00:30:55.719 "nvme_io": false, 00:30:55.719 "nvme_io_md": false, 00:30:55.719 "write_zeroes": true, 00:30:55.719 "zcopy": true, 00:30:55.719 "get_zone_info": false, 00:30:55.719 "zone_management": false, 00:30:55.719 "zone_append": false, 00:30:55.719 "compare": false, 00:30:55.719 "compare_and_write": false, 00:30:55.719 "abort": true, 00:30:55.719 "seek_hole": false, 00:30:55.719 "seek_data": false, 00:30:55.719 "copy": true, 00:30:55.719 "nvme_iov_md": false 00:30:55.719 }, 00:30:55.719 "memory_domains": [ 00:30:55.719 { 00:30:55.719 "dma_device_id": "system", 00:30:55.719 "dma_device_type": 1 00:30:55.719 }, 00:30:55.719 { 00:30:55.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.719 "dma_device_type": 2 00:30:55.719 } 00:30:55.719 ], 00:30:55.719 "driver_specific": { 00:30:55.719 "passthru": { 00:30:55.719 "name": "pt2", 00:30:55.719 "base_bdev_name": "malloc2" 00:30:55.719 } 00:30:55.719 } 00:30:55.719 }' 00:30:55.719 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:55.979 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:56.237 11:42:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:30:56.496 [2024-07-25 11:42:12.237614] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:56.496 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 74909914-bd33-4b26-98e8-73310b1f0732 '!=' 74909914-bd33-4b26-98e8-73310b1f0732 ']' 00:30:56.496 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:30:56.496 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:56.496 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:30:56.496 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:30:56.755 [2024-07-25 11:42:12.485436] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:56.755 11:42:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.755 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.014 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.014 "name": "raid_bdev1", 00:30:57.014 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:57.014 "strip_size_kb": 0, 00:30:57.014 "state": "online", 00:30:57.014 "raid_level": "raid1", 00:30:57.014 "superblock": true, 00:30:57.014 "num_base_bdevs": 2, 00:30:57.014 "num_base_bdevs_discovered": 1, 00:30:57.014 "num_base_bdevs_operational": 1, 00:30:57.014 "base_bdevs_list": [ 00:30:57.014 { 00:30:57.014 "name": null, 00:30:57.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.014 "is_configured": false, 00:30:57.014 "data_offset": 256, 00:30:57.014 "data_size": 7936 00:30:57.014 }, 00:30:57.014 { 00:30:57.014 "name": "pt2", 00:30:57.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.014 "is_configured": true, 00:30:57.014 "data_offset": 256, 00:30:57.014 "data_size": 7936 00:30:57.014 } 00:30:57.014 ] 00:30:57.014 }' 00:30:57.014 11:42:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.014 11:42:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:57.954 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:57.954 [2024-07-25 11:42:13.713746] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.954 [2024-07-25 11:42:13.713789] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:57.954 [2024-07-25 11:42:13.713879] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:57.954 [2024-07-25 11:42:13.713962] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:57.954 [2024-07-25 11:42:13.713978] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:57.954 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.954 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:30:58.218 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:30:58.218 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:30:58.218 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:30:58.218 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:30:58.218 11:42:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:30:58.495 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:58.753 [2024-07-25 11:42:14.449864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:58.753 [2024-07-25 11:42:14.449975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.753 [2024-07-25 11:42:14.450008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:58.753 [2024-07-25 11:42:14.450022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.753 [2024-07-25 11:42:14.452783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.753 [2024-07-25 11:42:14.452843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:58.753 [2024-07-25 11:42:14.452954] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:58.753 [2024-07-25 11:42:14.453038] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:58.753 [2024-07-25 11:42:14.453195] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:58.753 [2024-07-25 11:42:14.453211] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:58.753 pt2 00:30:58.753 [2024-07-25 11:42:14.453561] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:58.753 [2024-07-25 11:42:14.453797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:58.753 [2024-07-25 11:42:14.453821] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:58.754 [2024-07-25 11:42:14.454086] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
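The verify_raid_bdev_state call being expanded in the trace here boils down to a single RPC query filtered with jq. A minimal standalone sketch of the same check, assuming the rpc.py path and RPC socket used throughout this run, could look like:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # pull the raid bdev record and compare the fields the harness asserts on
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    [[ "$state" == online && "$discovered" -eq 1 ]] \
        || echo "unexpected raid_bdev1 state: $state with $discovered base bdev(s)"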
00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.754 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.012 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.012 "name": "raid_bdev1", 00:30:59.012 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:30:59.012 "strip_size_kb": 0, 00:30:59.012 "state": "online", 00:30:59.012 "raid_level": "raid1", 00:30:59.012 "superblock": true, 00:30:59.012 "num_base_bdevs": 2, 00:30:59.012 "num_base_bdevs_discovered": 1, 00:30:59.012 "num_base_bdevs_operational": 1, 00:30:59.012 "base_bdevs_list": [ 00:30:59.012 { 00:30:59.012 "name": null, 00:30:59.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.012 "is_configured": false, 00:30:59.012 "data_offset": 256, 00:30:59.012 "data_size": 7936 00:30:59.012 }, 00:30:59.012 { 00:30:59.012 "name": "pt2", 00:30:59.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:59.012 "is_configured": true, 00:30:59.012 "data_offset": 256, 00:30:59.012 "data_size": 7936 00:30:59.012 } 00:30:59.012 ] 00:30:59.012 }' 00:30:59.012 11:42:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.012 11:42:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:30:59.578 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:59.836 [2024-07-25 11:42:15.602351] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:59.836 [2024-07-25 11:42:15.602387] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:59.836 [2024-07-25 11:42:15.602481] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:59.836 [2024-07-25 11:42:15.602544] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:59.836 [2024-07-25 11:42:15.602563] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:59.836 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.836 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:31:00.094 11:42:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@542 -- # raid_bdev= 00:31:00.094 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:31:00.094 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:31:00.094 11:42:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:00.352 [2024-07-25 11:42:16.146456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:00.352 [2024-07-25 11:42:16.146549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:00.352 [2024-07-25 11:42:16.146577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:31:00.352 [2024-07-25 11:42:16.146595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:00.352 [2024-07-25 11:42:16.149397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:00.352 [2024-07-25 11:42:16.149450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:00.352 [2024-07-25 11:42:16.149558] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:00.352 [2024-07-25 11:42:16.149652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:00.352 [2024-07-25 11:42:16.149841] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:00.352 [2024-07-25 11:42:16.149863] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.353 [2024-07-25 11:42:16.149886] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:00.353 [2024-07-25 11:42:16.149980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:00.353 [2024-07-25 11:42:16.150089] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:00.353 [2024-07-25 11:42:16.150111] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:00.353 [2024-07-25 11:42:16.150439] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:00.353 [2024-07-25 11:42:16.150645] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:00.353 [2024-07-25 11:42:16.150661] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:00.353 pt1 00:31:00.353 [2024-07-25 11:42:16.150901] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.353 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.610 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.610 "name": "raid_bdev1", 00:31:00.610 "uuid": "74909914-bd33-4b26-98e8-73310b1f0732", 00:31:00.610 "strip_size_kb": 0, 00:31:00.610 "state": "online", 00:31:00.610 "raid_level": "raid1", 00:31:00.610 "superblock": true, 00:31:00.610 "num_base_bdevs": 2, 00:31:00.610 "num_base_bdevs_discovered": 1, 00:31:00.610 "num_base_bdevs_operational": 1, 00:31:00.610 "base_bdevs_list": [ 00:31:00.610 { 00:31:00.610 "name": null, 00:31:00.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.610 "is_configured": false, 00:31:00.610 "data_offset": 256, 00:31:00.610 "data_size": 7936 00:31:00.610 }, 00:31:00.610 { 00:31:00.610 "name": "pt2", 00:31:00.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:00.610 "is_configured": true, 00:31:00.610 "data_offset": 256, 00:31:00.610 "data_size": 7936 00:31:00.610 } 00:31:00.610 ] 00:31:00.610 }' 00:31:00.610 11:42:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.610 11:42:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:01.542 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:01.542 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:01.542 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:31:01.542 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:01.542 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:31:01.836 [2024-07-25 11:42:17.619489] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 74909914-bd33-4b26-98e8-73310b1f0732 '!=' 74909914-bd33-4b26-98e8-73310b1f0732 ']' 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 100320 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 100320 ']' 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 100320 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:01.836 11:42:17 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100320 00:31:01.836 killing process with pid 100320 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100320' 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 100320 00:31:01.836 [2024-07-25 11:42:17.665922] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:01.836 11:42:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 100320 00:31:01.836 [2024-07-25 11:42:17.666018] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:01.836 [2024-07-25 11:42:17.666099] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:01.836 [2024-07-25 11:42:17.666113] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:02.093 [2024-07-25 11:42:17.845382] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:03.466 ************************************ 00:31:03.466 END TEST raid_superblock_test_4k 00:31:03.466 ************************************ 00:31:03.466 11:42:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:31:03.466 00:31:03.466 real 0m17.902s 00:31:03.466 user 0m32.271s 00:31:03.466 sys 0m2.330s 00:31:03.466 11:42:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.466 11:42:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:03.466 11:42:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # '[' true = true ']' 00:31:03.466 11:42:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:31:03.466 11:42:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:03.466 11:42:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.466 11:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:03.466 ************************************ 00:31:03.466 START TEST raid_rebuild_test_sb_4k 00:31:03.466 ************************************ 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:31:03.466 11:42:19 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:03.466 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=100851 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 100851 /var/tmp/spdk-raid.sock 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 100851 ']' 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.467 11:42:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:03.467 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:03.467 Zero copy mechanism will not be used. 00:31:03.467 [2024-07-25 11:42:19.157937] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
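The rebuild test drives I/O through a dedicated bdevperf instance rather than the regular SPDK app target; the trace above records the exact invocation and its flags. A condensed sketch of starting it and waiting for its RPC socket to come up (the polling loop is illustrative, not the harness's waitforlisten helper itself) might be:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # poll until the bdevperf RPC server answers on the socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done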
00:31:03.467 [2024-07-25 11:42:19.158083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100851 ] 00:31:03.467 [2024-07-25 11:42:19.326973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.725 [2024-07-25 11:42:19.586641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.986 [2024-07-25 11:42:19.795686] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:03.986 [2024-07-25 11:42:19.795729] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:04.555 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.555 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:31:04.555 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:04.555 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:31:04.555 BaseBdev1_malloc 00:31:04.555 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:04.813 [2024-07-25 11:42:20.646394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:04.813 [2024-07-25 11:42:20.646716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.813 [2024-07-25 11:42:20.646775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:04.813 [2024-07-25 11:42:20.646794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.813 [2024-07-25 11:42:20.649624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.813 [2024-07-25 11:42:20.649678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:04.813 BaseBdev1 00:31:04.813 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:04.813 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:31:05.071 BaseBdev2_malloc 00:31:05.329 11:42:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:05.610 [2024-07-25 11:42:21.213629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:05.610 [2024-07-25 11:42:21.213824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.610 [2024-07-25 11:42:21.213895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:05.610 [2024-07-25 11:42:21.213926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.610 [2024-07-25 11:42:21.217934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.610 [2024-07-25 11:42:21.217991] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:31:05.610 BaseBdev2 00:31:05.610 11:42:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:31:05.868 spare_malloc 00:31:05.868 11:42:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:06.126 spare_delay 00:31:06.126 11:42:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:06.385 [2024-07-25 11:42:22.079817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:06.385 [2024-07-25 11:42:22.079914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:06.385 [2024-07-25 11:42:22.079957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:06.385 [2024-07-25 11:42:22.079974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:06.385 [2024-07-25 11:42:22.082914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:06.385 [2024-07-25 11:42:22.082958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:06.385 spare 00:31:06.385 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:06.643 [2024-07-25 11:42:22.399983] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:06.643 [2024-07-25 11:42:22.402563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:06.643 [2024-07-25 11:42:22.402842] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:06.643 [2024-07-25 11:42:22.402860] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:06.643 [2024-07-25 11:42:22.403237] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:06.643 [2024-07-25 11:42:22.403496] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:06.643 [2024-07-25 11:42:22.403518] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:06.643 [2024-07-25 11:42:22.403756] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.643 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.900 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:06.900 "name": "raid_bdev1", 00:31:06.900 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:06.900 "strip_size_kb": 0, 00:31:06.900 "state": "online", 00:31:06.900 "raid_level": "raid1", 00:31:06.900 "superblock": true, 00:31:06.900 "num_base_bdevs": 2, 00:31:06.900 "num_base_bdevs_discovered": 2, 00:31:06.900 "num_base_bdevs_operational": 2, 00:31:06.900 "base_bdevs_list": [ 00:31:06.900 { 00:31:06.900 "name": "BaseBdev1", 00:31:06.900 "uuid": "b3493c7e-e548-5515-a694-2ffd1aac7279", 00:31:06.901 "is_configured": true, 00:31:06.901 "data_offset": 256, 00:31:06.901 "data_size": 7936 00:31:06.901 }, 00:31:06.901 { 00:31:06.901 "name": "BaseBdev2", 00:31:06.901 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:06.901 "is_configured": true, 00:31:06.901 "data_offset": 256, 00:31:06.901 "data_size": 7936 00:31:06.901 } 00:31:06.901 ] 00:31:06.901 }' 00:31:06.901 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:06.901 11:42:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:07.467 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:07.467 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:31:07.724 [2024-07-25 11:42:23.540731] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:07.724 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:31:07.724 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.724 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.981 11:42:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:08.549 [2024-07-25 11:42:24.144711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:08.549 /dev/nbd0 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:08.549 1+0 records in 00:31:08.549 1+0 records out 00:31:08.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518411 s, 7.9 MB/s 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:31:08.549 11:42:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:09.537 7936+0 records in 00:31:09.537 7936+0 records out 00:31:09.537 32505856 bytes (33 MB, 31 MiB) copied, 0.996029 s, 
32.6 MB/s 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:09.537 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:09.795 [2024-07-25 11:42:25.447341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:09.795 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:10.053 [2024-07-25 11:42:25.731549] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.053 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
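The data written before the first base bdev is removed goes through an NBD export of the raid bdev rather than an SPDK-internal path. Condensed from the trace above (same socket, device node, and dd parameters as this run), that write pass amounts to roughly:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    # fill the whole 7936-block, 4 KiB-block bdev so the later rebuild has data to copy
    dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
    $rpc nbd_stop_disk /dev/nbd0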
00:31:10.311 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:10.311 "name": "raid_bdev1", 00:31:10.311 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:10.311 "strip_size_kb": 0, 00:31:10.311 "state": "online", 00:31:10.311 "raid_level": "raid1", 00:31:10.311 "superblock": true, 00:31:10.311 "num_base_bdevs": 2, 00:31:10.311 "num_base_bdevs_discovered": 1, 00:31:10.311 "num_base_bdevs_operational": 1, 00:31:10.311 "base_bdevs_list": [ 00:31:10.311 { 00:31:10.311 "name": null, 00:31:10.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.311 "is_configured": false, 00:31:10.311 "data_offset": 256, 00:31:10.311 "data_size": 7936 00:31:10.311 }, 00:31:10.311 { 00:31:10.311 "name": "BaseBdev2", 00:31:10.311 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:10.311 "is_configured": true, 00:31:10.311 "data_offset": 256, 00:31:10.311 "data_size": 7936 00:31:10.311 } 00:31:10.311 ] 00:31:10.311 }' 00:31:10.311 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:10.312 11:42:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:10.878 11:42:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:11.136 [2024-07-25 11:42:26.879986] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:11.136 [2024-07-25 11:42:26.895334] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:31:11.136 [2024-07-25 11:42:26.897704] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:11.136 11:42:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.073 11:42:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.331 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:12.331 "name": "raid_bdev1", 00:31:12.331 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:12.331 "strip_size_kb": 0, 00:31:12.331 "state": "online", 00:31:12.331 "raid_level": "raid1", 00:31:12.331 "superblock": true, 00:31:12.331 "num_base_bdevs": 2, 00:31:12.331 "num_base_bdevs_discovered": 2, 00:31:12.331 "num_base_bdevs_operational": 2, 00:31:12.331 "process": { 00:31:12.331 "type": "rebuild", 00:31:12.331 "target": "spare", 00:31:12.331 "progress": { 00:31:12.331 "blocks": 3072, 00:31:12.331 "percent": 38 00:31:12.331 } 00:31:12.331 }, 00:31:12.331 "base_bdevs_list": [ 00:31:12.331 { 00:31:12.331 "name": "spare", 00:31:12.331 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:12.331 
"is_configured": true, 00:31:12.331 "data_offset": 256, 00:31:12.331 "data_size": 7936 00:31:12.331 }, 00:31:12.331 { 00:31:12.331 "name": "BaseBdev2", 00:31:12.331 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:12.331 "is_configured": true, 00:31:12.331 "data_offset": 256, 00:31:12.331 "data_size": 7936 00:31:12.331 } 00:31:12.331 ] 00:31:12.331 }' 00:31:12.331 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:12.589 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:12.589 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:12.589 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:12.589 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:12.847 [2024-07-25 11:42:28.519758] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:12.847 [2024-07-25 11:42:28.609877] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:12.847 [2024-07-25 11:42:28.610002] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:12.847 [2024-07-25 11:42:28.610030] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:12.847 [2024-07-25 11:42:28.610044] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.847 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.848 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.106 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.106 "name": "raid_bdev1", 00:31:13.106 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:13.106 "strip_size_kb": 0, 00:31:13.106 "state": "online", 00:31:13.106 "raid_level": "raid1", 00:31:13.106 "superblock": true, 00:31:13.106 "num_base_bdevs": 2, 00:31:13.106 "num_base_bdevs_discovered": 1, 00:31:13.106 "num_base_bdevs_operational": 1, 
00:31:13.106 "base_bdevs_list": [ 00:31:13.106 { 00:31:13.106 "name": null, 00:31:13.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.106 "is_configured": false, 00:31:13.106 "data_offset": 256, 00:31:13.106 "data_size": 7936 00:31:13.106 }, 00:31:13.106 { 00:31:13.106 "name": "BaseBdev2", 00:31:13.106 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:13.106 "is_configured": true, 00:31:13.106 "data_offset": 256, 00:31:13.106 "data_size": 7936 00:31:13.106 } 00:31:13.106 ] 00:31:13.106 }' 00:31:13.106 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.106 11:42:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:14.042 "name": "raid_bdev1", 00:31:14.042 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:14.042 "strip_size_kb": 0, 00:31:14.042 "state": "online", 00:31:14.042 "raid_level": "raid1", 00:31:14.042 "superblock": true, 00:31:14.042 "num_base_bdevs": 2, 00:31:14.042 "num_base_bdevs_discovered": 1, 00:31:14.042 "num_base_bdevs_operational": 1, 00:31:14.042 "base_bdevs_list": [ 00:31:14.042 { 00:31:14.042 "name": null, 00:31:14.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.042 "is_configured": false, 00:31:14.042 "data_offset": 256, 00:31:14.042 "data_size": 7936 00:31:14.042 }, 00:31:14.042 { 00:31:14.042 "name": "BaseBdev2", 00:31:14.042 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:14.042 "is_configured": true, 00:31:14.042 "data_offset": 256, 00:31:14.042 "data_size": 7936 00:31:14.042 } 00:31:14.042 ] 00:31:14.042 }' 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:14.042 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:14.300 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:14.300 11:42:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:14.557 [2024-07-25 11:42:30.213100] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:14.558 [2024-07-25 11:42:30.227879] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:31:14.558 [2024-07-25 11:42:30.230327] bdev_raid.c:2921:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:14.558 11:42:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.493 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.751 "name": "raid_bdev1", 00:31:15.751 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:15.751 "strip_size_kb": 0, 00:31:15.751 "state": "online", 00:31:15.751 "raid_level": "raid1", 00:31:15.751 "superblock": true, 00:31:15.751 "num_base_bdevs": 2, 00:31:15.751 "num_base_bdevs_discovered": 2, 00:31:15.751 "num_base_bdevs_operational": 2, 00:31:15.751 "process": { 00:31:15.751 "type": "rebuild", 00:31:15.751 "target": "spare", 00:31:15.751 "progress": { 00:31:15.751 "blocks": 3072, 00:31:15.751 "percent": 38 00:31:15.751 } 00:31:15.751 }, 00:31:15.751 "base_bdevs_list": [ 00:31:15.751 { 00:31:15.751 "name": "spare", 00:31:15.751 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:15.751 "is_configured": true, 00:31:15.751 "data_offset": 256, 00:31:15.751 "data_size": 7936 00:31:15.751 }, 00:31:15.751 { 00:31:15.751 "name": "BaseBdev2", 00:31:15.751 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:15.751 "is_configured": true, 00:31:15.751 "data_offset": 256, 00:31:15.751 "data_size": 7936 00:31:15.751 } 00:31:15.751 ] 00:31:15.751 }' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:31:15.751 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1515 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:15.751 
11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.751 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.010 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:16.010 "name": "raid_bdev1", 00:31:16.010 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:16.010 "strip_size_kb": 0, 00:31:16.010 "state": "online", 00:31:16.010 "raid_level": "raid1", 00:31:16.010 "superblock": true, 00:31:16.010 "num_base_bdevs": 2, 00:31:16.010 "num_base_bdevs_discovered": 2, 00:31:16.010 "num_base_bdevs_operational": 2, 00:31:16.010 "process": { 00:31:16.010 "type": "rebuild", 00:31:16.010 "target": "spare", 00:31:16.010 "progress": { 00:31:16.010 "blocks": 4096, 00:31:16.010 "percent": 51 00:31:16.010 } 00:31:16.010 }, 00:31:16.010 "base_bdevs_list": [ 00:31:16.010 { 00:31:16.010 "name": "spare", 00:31:16.010 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:16.010 "is_configured": true, 00:31:16.010 "data_offset": 256, 00:31:16.010 "data_size": 7936 00:31:16.010 }, 00:31:16.010 { 00:31:16.010 "name": "BaseBdev2", 00:31:16.010 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:16.010 "is_configured": true, 00:31:16.010 "data_offset": 256, 00:31:16.010 "data_size": 7936 00:31:16.010 } 00:31:16.010 ] 00:31:16.010 }' 00:31:16.010 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:16.275 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:16.275 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:16.275 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:16.275 11:42:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.210 11:42:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:17.469 "name": "raid_bdev1", 00:31:17.469 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:17.469 "strip_size_kb": 0, 00:31:17.469 "state": "online", 00:31:17.469 "raid_level": "raid1", 00:31:17.469 "superblock": true, 00:31:17.469 "num_base_bdevs": 2, 00:31:17.469 "num_base_bdevs_discovered": 2, 00:31:17.469 "num_base_bdevs_operational": 2, 00:31:17.469 "process": { 00:31:17.469 "type": "rebuild", 00:31:17.469 "target": "spare", 00:31:17.469 "progress": { 00:31:17.469 "blocks": 7424, 00:31:17.469 "percent": 93 00:31:17.469 } 00:31:17.469 }, 00:31:17.469 "base_bdevs_list": [ 00:31:17.469 { 00:31:17.469 "name": "spare", 00:31:17.469 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:17.469 "is_configured": true, 00:31:17.469 "data_offset": 256, 00:31:17.469 "data_size": 7936 00:31:17.469 }, 00:31:17.469 { 00:31:17.469 "name": "BaseBdev2", 00:31:17.469 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:17.469 "is_configured": true, 00:31:17.469 "data_offset": 256, 00:31:17.469 "data_size": 7936 00:31:17.469 } 00:31:17.469 ] 00:31:17.469 }' 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:17.469 11:42:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:17.727 [2024-07-25 11:42:33.353692] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:17.727 [2024-07-25 11:42:33.353797] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:17.727 [2024-07-25 11:42:33.353931] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.661 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.920 "name": "raid_bdev1", 00:31:18.920 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:18.920 "strip_size_kb": 0, 00:31:18.920 "state": "online", 00:31:18.920 "raid_level": "raid1", 00:31:18.920 "superblock": true, 00:31:18.920 "num_base_bdevs": 
2, 00:31:18.920 "num_base_bdevs_discovered": 2, 00:31:18.920 "num_base_bdevs_operational": 2, 00:31:18.920 "base_bdevs_list": [ 00:31:18.920 { 00:31:18.920 "name": "spare", 00:31:18.920 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:18.920 "is_configured": true, 00:31:18.920 "data_offset": 256, 00:31:18.920 "data_size": 7936 00:31:18.920 }, 00:31:18.920 { 00:31:18.920 "name": "BaseBdev2", 00:31:18.920 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:18.920 "is_configured": true, 00:31:18.920 "data_offset": 256, 00:31:18.920 "data_size": 7936 00:31:18.920 } 00:31:18.920 ] 00:31:18.920 }' 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.920 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.178 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:19.178 "name": "raid_bdev1", 00:31:19.178 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:19.178 "strip_size_kb": 0, 00:31:19.178 "state": "online", 00:31:19.178 "raid_level": "raid1", 00:31:19.178 "superblock": true, 00:31:19.178 "num_base_bdevs": 2, 00:31:19.178 "num_base_bdevs_discovered": 2, 00:31:19.178 "num_base_bdevs_operational": 2, 00:31:19.178 "base_bdevs_list": [ 00:31:19.178 { 00:31:19.178 "name": "spare", 00:31:19.178 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:19.178 "is_configured": true, 00:31:19.178 "data_offset": 256, 00:31:19.178 "data_size": 7936 00:31:19.178 }, 00:31:19.178 { 00:31:19.178 "name": "BaseBdev2", 00:31:19.178 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:19.178 "is_configured": true, 00:31:19.178 "data_offset": 256, 00:31:19.178 "data_size": 7936 00:31:19.178 } 00:31:19.178 ] 00:31:19.178 }' 00:31:19.178 11:42:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:19.178 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:19.178 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.437 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.696 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.696 "name": "raid_bdev1", 00:31:19.696 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:19.696 "strip_size_kb": 0, 00:31:19.696 "state": "online", 00:31:19.696 "raid_level": "raid1", 00:31:19.696 "superblock": true, 00:31:19.696 "num_base_bdevs": 2, 00:31:19.696 "num_base_bdevs_discovered": 2, 00:31:19.696 "num_base_bdevs_operational": 2, 00:31:19.696 "base_bdevs_list": [ 00:31:19.696 { 00:31:19.696 "name": "spare", 00:31:19.696 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:19.696 "is_configured": true, 00:31:19.696 "data_offset": 256, 00:31:19.696 "data_size": 7936 00:31:19.696 }, 00:31:19.696 { 00:31:19.696 "name": "BaseBdev2", 00:31:19.696 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:19.696 "is_configured": true, 00:31:19.696 "data_offset": 256, 00:31:19.696 "data_size": 7936 00:31:19.696 } 00:31:19.696 ] 00:31:19.696 }' 00:31:19.696 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.696 11:42:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:20.263 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:20.521 [2024-07-25 11:42:36.321546] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:20.521 [2024-07-25 11:42:36.321593] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:20.521 [2024-07-25 11:42:36.321727] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:20.521 [2024-07-25 11:42:36.321813] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:20.521 [2024-07-25 11:42:36.321834] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:20.521 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:31:20.521 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:20.782 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:21.040 /dev/nbd0 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:21.040 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:21.299 1+0 records in 00:31:21.299 1+0 records out 00:31:21.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383544 s, 10.7 MB/s 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:21.299 11:42:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:21.299 /dev/nbd1 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:21.299 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:21.557 1+0 records in 00:31:21.557 1+0 records out 00:31:21.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355173 s, 11.5 MB/s 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:21.557 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.558 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:21.816 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:31:22.383 11:42:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:22.383 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:22.640 [2024-07-25 11:42:38.443479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:22.640 [2024-07-25 11:42:38.443599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:22.640 [2024-07-25 11:42:38.443644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:22.640 [2024-07-25 11:42:38.443665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:22.640 [2024-07-25 11:42:38.446672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:22.640 [2024-07-25 11:42:38.446740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:22.640 [2024-07-25 11:42:38.446864] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:22.640 [2024-07-25 
11:42:38.446943] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:22.640 [2024-07-25 11:42:38.447132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:22.640 spare 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.640 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.641 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.641 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.899 [2024-07-25 11:42:38.547257] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:22.899 [2024-07-25 11:42:38.547292] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:22.899 [2024-07-25 11:42:38.547819] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:31:22.899 [2024-07-25 11:42:38.548097] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:22.899 [2024-07-25 11:42:38.548128] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:22.899 [2024-07-25 11:42:38.548346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.899 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.899 "name": "raid_bdev1", 00:31:22.899 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:22.899 "strip_size_kb": 0, 00:31:22.899 "state": "online", 00:31:22.899 "raid_level": "raid1", 00:31:22.899 "superblock": true, 00:31:22.899 "num_base_bdevs": 2, 00:31:22.899 "num_base_bdevs_discovered": 2, 00:31:22.899 "num_base_bdevs_operational": 2, 00:31:22.899 "base_bdevs_list": [ 00:31:22.899 { 00:31:22.899 "name": "spare", 00:31:22.899 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:22.899 "is_configured": true, 00:31:22.899 "data_offset": 256, 00:31:22.899 "data_size": 7936 00:31:22.899 }, 00:31:22.899 { 00:31:22.899 "name": "BaseBdev2", 00:31:22.899 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:22.899 "is_configured": true, 00:31:22.899 "data_offset": 256, 00:31:22.899 "data_size": 7936 00:31:22.899 } 00:31:22.899 ] 00:31:22.899 }' 00:31:22.899 11:42:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.899 11:42:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.466 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.725 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.725 "name": "raid_bdev1", 00:31:23.725 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:23.725 "strip_size_kb": 0, 00:31:23.725 "state": "online", 00:31:23.725 "raid_level": "raid1", 00:31:23.725 "superblock": true, 00:31:23.725 "num_base_bdevs": 2, 00:31:23.725 "num_base_bdevs_discovered": 2, 00:31:23.725 "num_base_bdevs_operational": 2, 00:31:23.725 "base_bdevs_list": [ 00:31:23.725 { 00:31:23.725 "name": "spare", 00:31:23.725 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:23.725 "is_configured": true, 00:31:23.725 "data_offset": 256, 00:31:23.725 "data_size": 7936 00:31:23.725 }, 00:31:23.725 { 00:31:23.725 "name": "BaseBdev2", 00:31:23.725 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:23.725 "is_configured": true, 00:31:23.725 "data_offset": 256, 00:31:23.725 "data_size": 7936 00:31:23.725 } 00:31:23.725 ] 00:31:23.725 }' 00:31:23.725 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.984 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:23.984 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.984 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.984 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.984 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:24.241 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:31:24.241 11:42:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:24.503 [2024-07-25 11:42:40.132935] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.503 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:24.762 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:24.762 "name": "raid_bdev1", 00:31:24.762 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:24.762 "strip_size_kb": 0, 00:31:24.762 "state": "online", 00:31:24.762 "raid_level": "raid1", 00:31:24.762 "superblock": true, 00:31:24.762 "num_base_bdevs": 2, 00:31:24.762 "num_base_bdevs_discovered": 1, 00:31:24.762 "num_base_bdevs_operational": 1, 00:31:24.762 "base_bdevs_list": [ 00:31:24.762 { 00:31:24.762 "name": null, 00:31:24.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.762 "is_configured": false, 00:31:24.762 "data_offset": 256, 00:31:24.762 "data_size": 7936 00:31:24.762 }, 00:31:24.762 { 00:31:24.762 "name": "BaseBdev2", 00:31:24.762 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:24.762 "is_configured": true, 00:31:24.762 "data_offset": 256, 00:31:24.762 "data_size": 7936 00:31:24.762 } 00:31:24.762 ] 00:31:24.762 }' 00:31:24.762 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:24.762 11:42:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:25.329 11:42:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:25.587 [2024-07-25 11:42:41.353294] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:25.587 [2024-07-25 11:42:41.353569] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:25.587 [2024-07-25 11:42:41.353592] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
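The trace above shows the removed 'spare' being re-added from its on-disk superblock; the records that follow start another rebuild and poll it with verify_raid_bdev_process. As a rough stand-alone sketch of the same progress check the test drives through rpc.py and jq (socket and script paths copied from the trace; the 30-iteration bound is an invented safety limit, not part of bdev_raid.sh):

    # Poll raid_bdev1 until its rebuild process entry disappears;
    # '.process.type // "none"' falls back to "none" once the rebuild is done.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for _ in $(seq 1 30); do
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] && break
        jq -r '.process.progress.percent' <<< "$info"   # e.g. 38, 51, 93 as seen earlier in the trace
        sleep 1
    done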
00:31:25.587 [2024-07-25 11:42:41.353660] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:25.587 [2024-07-25 11:42:41.367778] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:31:25.587 [2024-07-25 11:42:41.370206] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:25.587 11:42:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.521 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:27.089 "name": "raid_bdev1", 00:31:27.089 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:27.089 "strip_size_kb": 0, 00:31:27.089 "state": "online", 00:31:27.089 "raid_level": "raid1", 00:31:27.089 "superblock": true, 00:31:27.089 "num_base_bdevs": 2, 00:31:27.089 "num_base_bdevs_discovered": 2, 00:31:27.089 "num_base_bdevs_operational": 2, 00:31:27.089 "process": { 00:31:27.089 "type": "rebuild", 00:31:27.089 "target": "spare", 00:31:27.089 "progress": { 00:31:27.089 "blocks": 3072, 00:31:27.089 "percent": 38 00:31:27.089 } 00:31:27.089 }, 00:31:27.089 "base_bdevs_list": [ 00:31:27.089 { 00:31:27.089 "name": "spare", 00:31:27.089 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:27.089 "is_configured": true, 00:31:27.089 "data_offset": 256, 00:31:27.089 "data_size": 7936 00:31:27.089 }, 00:31:27.089 { 00:31:27.089 "name": "BaseBdev2", 00:31:27.089 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:27.089 "is_configured": true, 00:31:27.089 "data_offset": 256, 00:31:27.089 "data_size": 7936 00:31:27.089 } 00:31:27.089 ] 00:31:27.089 }' 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.089 11:42:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:27.349 [2024-07-25 11:42:43.024557] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:27.349 [2024-07-25 11:42:43.082416] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:27.349 [2024-07-25 11:42:43.082527] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:31:27.349 [2024-07-25 11:42:43.082556] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:27.349 [2024-07-25 11:42:43.082568] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.349 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.646 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.646 "name": "raid_bdev1", 00:31:27.646 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:27.646 "strip_size_kb": 0, 00:31:27.646 "state": "online", 00:31:27.646 "raid_level": "raid1", 00:31:27.646 "superblock": true, 00:31:27.646 "num_base_bdevs": 2, 00:31:27.646 "num_base_bdevs_discovered": 1, 00:31:27.646 "num_base_bdevs_operational": 1, 00:31:27.646 "base_bdevs_list": [ 00:31:27.646 { 00:31:27.646 "name": null, 00:31:27.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.646 "is_configured": false, 00:31:27.646 "data_offset": 256, 00:31:27.646 "data_size": 7936 00:31:27.646 }, 00:31:27.646 { 00:31:27.646 "name": "BaseBdev2", 00:31:27.646 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:27.646 "is_configured": true, 00:31:27.646 "data_offset": 256, 00:31:27.646 "data_size": 7936 00:31:27.646 } 00:31:27.646 ] 00:31:27.646 }' 00:31:27.646 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.646 11:42:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:28.580 11:42:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:28.580 [2024-07-25 11:42:44.312089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:28.580 [2024-07-25 11:42:44.312178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.580 [2024-07-25 11:42:44.312214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:28.580 [2024-07-25 11:42:44.312230] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.580 [2024-07-25 11:42:44.312905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.580 [2024-07-25 11:42:44.312942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:28.580 [2024-07-25 11:42:44.313061] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:28.580 [2024-07-25 11:42:44.313081] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:28.580 [2024-07-25 11:42:44.313098] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:28.580 [2024-07-25 11:42:44.313134] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:28.580 [2024-07-25 11:42:44.327494] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:31:28.580 spare 00:31:28.580 [2024-07-25 11:42:44.329896] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:28.580 11:42:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.514 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:29.773 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:29.773 "name": "raid_bdev1", 00:31:29.773 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:29.773 "strip_size_kb": 0, 00:31:29.773 "state": "online", 00:31:29.773 "raid_level": "raid1", 00:31:29.773 "superblock": true, 00:31:29.773 "num_base_bdevs": 2, 00:31:29.773 "num_base_bdevs_discovered": 2, 00:31:29.773 "num_base_bdevs_operational": 2, 00:31:29.773 "process": { 00:31:29.773 "type": "rebuild", 00:31:29.773 "target": "spare", 00:31:29.773 "progress": { 00:31:29.773 "blocks": 3072, 00:31:29.773 "percent": 38 00:31:29.773 } 00:31:29.773 }, 00:31:29.773 "base_bdevs_list": [ 00:31:29.773 { 00:31:29.773 "name": "spare", 00:31:29.773 "uuid": "6ab8641a-212c-52da-911e-cffa13f18b58", 00:31:29.773 "is_configured": true, 00:31:29.773 "data_offset": 256, 00:31:29.773 "data_size": 7936 00:31:29.773 }, 00:31:29.773 { 00:31:29.773 "name": "BaseBdev2", 00:31:29.773 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:29.773 "is_configured": true, 00:31:29.773 "data_offset": 256, 00:31:29.773 "data_size": 7936 00:31:29.773 } 00:31:29.773 ] 00:31:29.773 }' 00:31:29.773 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:30.031 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
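Right after confirming the rebuild is targeting 'spare', the test deletes that passthru bdev out from under the running rebuild and expects the array to stay online but degraded. A condensed, hypothetical replay of that step (jq -e is used here only to turn the state check into an exit code; the real test uses verify_raid_bdev_state):

    # Remove the rebuild target mid-flight, then assert raid_bdev1 is still
    # online with a single discovered base bdev.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete spare
    $rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1")
        | .state == "online" and .num_base_bdevs_discovered == 1'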
00:31:30.031 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:30.031 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:30.031 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:30.290 [2024-07-25 11:42:45.924213] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:30.290 [2024-07-25 11:42:45.941401] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:30.290 [2024-07-25 11:42:45.941488] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.290 [2024-07-25 11:42:45.941513] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:30.290 [2024-07-25 11:42:45.941529] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.290 11:42:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.548 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.548 "name": "raid_bdev1", 00:31:30.548 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:30.548 "strip_size_kb": 0, 00:31:30.548 "state": "online", 00:31:30.548 "raid_level": "raid1", 00:31:30.548 "superblock": true, 00:31:30.548 "num_base_bdevs": 2, 00:31:30.548 "num_base_bdevs_discovered": 1, 00:31:30.548 "num_base_bdevs_operational": 1, 00:31:30.548 "base_bdevs_list": [ 00:31:30.548 { 00:31:30.548 "name": null, 00:31:30.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.548 "is_configured": false, 00:31:30.548 "data_offset": 256, 00:31:30.548 "data_size": 7936 00:31:30.548 }, 00:31:30.548 { 00:31:30.548 "name": "BaseBdev2", 00:31:30.548 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:30.548 "is_configured": true, 00:31:30.548 "data_offset": 256, 00:31:30.548 "data_size": 7936 00:31:30.548 } 00:31:30.548 ] 00:31:30.548 }' 00:31:30.548 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:31:30.548 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.114 11:42:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:31.373 "name": "raid_bdev1", 00:31:31.373 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:31.373 "strip_size_kb": 0, 00:31:31.373 "state": "online", 00:31:31.373 "raid_level": "raid1", 00:31:31.373 "superblock": true, 00:31:31.373 "num_base_bdevs": 2, 00:31:31.373 "num_base_bdevs_discovered": 1, 00:31:31.373 "num_base_bdevs_operational": 1, 00:31:31.373 "base_bdevs_list": [ 00:31:31.373 { 00:31:31.373 "name": null, 00:31:31.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.373 "is_configured": false, 00:31:31.373 "data_offset": 256, 00:31:31.373 "data_size": 7936 00:31:31.373 }, 00:31:31.373 { 00:31:31.373 "name": "BaseBdev2", 00:31:31.373 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:31.373 "is_configured": true, 00:31:31.373 "data_offset": 256, 00:31:31.373 "data_size": 7936 00:31:31.373 } 00:31:31.373 ] 00:31:31.373 }' 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:31.373 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:31.631 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:31.889 [2024-07-25 11:42:47.747279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:31.889 [2024-07-25 11:42:47.747369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.889 [2024-07-25 11:42:47.747402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:31.889 [2024-07-25 11:42:47.747421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.889 [2024-07-25 11:42:47.747994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.889 [2024-07-25 11:42:47.748025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:31:31.889 [2024-07-25 11:42:47.748129] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:31.889 [2024-07-25 11:42:47.748161] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:31.889 [2024-07-25 11:42:47.748174] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:31.889 BaseBdev1 00:31:31.889 11:42:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.265 11:42:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.265 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.265 "name": "raid_bdev1", 00:31:33.265 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:33.265 "strip_size_kb": 0, 00:31:33.265 "state": "online", 00:31:33.265 "raid_level": "raid1", 00:31:33.266 "superblock": true, 00:31:33.266 "num_base_bdevs": 2, 00:31:33.266 "num_base_bdevs_discovered": 1, 00:31:33.266 "num_base_bdevs_operational": 1, 00:31:33.266 "base_bdevs_list": [ 00:31:33.266 { 00:31:33.266 "name": null, 00:31:33.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.266 "is_configured": false, 00:31:33.266 "data_offset": 256, 00:31:33.266 "data_size": 7936 00:31:33.266 }, 00:31:33.266 { 00:31:33.266 "name": "BaseBdev2", 00:31:33.266 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:33.266 "is_configured": true, 00:31:33.266 "data_offset": 256, 00:31:33.266 "data_size": 7936 00:31:33.266 } 00:31:33.266 ] 00:31:33.266 }' 00:31:33.266 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.266 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.832 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.090 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:34.090 "name": "raid_bdev1", 00:31:34.090 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:34.090 "strip_size_kb": 0, 00:31:34.090 "state": "online", 00:31:34.090 "raid_level": "raid1", 00:31:34.090 "superblock": true, 00:31:34.090 "num_base_bdevs": 2, 00:31:34.090 "num_base_bdevs_discovered": 1, 00:31:34.090 "num_base_bdevs_operational": 1, 00:31:34.090 "base_bdevs_list": [ 00:31:34.090 { 00:31:34.090 "name": null, 00:31:34.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.090 "is_configured": false, 00:31:34.090 "data_offset": 256, 00:31:34.090 "data_size": 7936 00:31:34.090 }, 00:31:34.090 { 00:31:34.090 "name": "BaseBdev2", 00:31:34.090 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:34.090 "is_configured": true, 00:31:34.090 "data_offset": 256, 00:31:34.090 "data_size": 7936 00:31:34.090 } 00:31:34.090 ] 00:31:34.090 }' 00:31:34.090 11:42:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:34.348 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:34.605 [2024-07-25 11:42:50.307959] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:34.605 [2024-07-25 11:42:50.308158] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:34.605 [2024-07-25 11:42:50.308188] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:34.605 request: 00:31:34.605 { 00:31:34.605 "base_bdev": "BaseBdev1", 00:31:34.605 "raid_bdev": "raid_bdev1", 00:31:34.605 "method": "bdev_raid_add_base_bdev", 00:31:34.605 "req_id": 1 00:31:34.605 } 00:31:34.605 Got JSON-RPC error response 00:31:34.605 response: 00:31:34.605 { 00:31:34.605 "code": -22, 00:31:34.605 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:34.605 } 00:31:34.605 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:31:34.605 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:34.605 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:34.605 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:34.605 11:42:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.606 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.865 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.865 "name": "raid_bdev1", 00:31:35.865 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:35.865 "strip_size_kb": 0, 00:31:35.865 "state": "online", 00:31:35.865 "raid_level": "raid1", 00:31:35.865 "superblock": true, 00:31:35.865 "num_base_bdevs": 2, 00:31:35.865 "num_base_bdevs_discovered": 1, 00:31:35.865 "num_base_bdevs_operational": 1, 00:31:35.865 
"base_bdevs_list": [ 00:31:35.865 { 00:31:35.865 "name": null, 00:31:35.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.865 "is_configured": false, 00:31:35.865 "data_offset": 256, 00:31:35.865 "data_size": 7936 00:31:35.865 }, 00:31:35.865 { 00:31:35.865 "name": "BaseBdev2", 00:31:35.865 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:35.865 "is_configured": true, 00:31:35.865 "data_offset": 256, 00:31:35.865 "data_size": 7936 00:31:35.865 } 00:31:35.865 ] 00:31:35.865 }' 00:31:35.865 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.865 11:42:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.432 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.690 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.690 "name": "raid_bdev1", 00:31:36.690 "uuid": "b4d7cfc9-f890-4675-b005-11ebaa7a13fa", 00:31:36.690 "strip_size_kb": 0, 00:31:36.690 "state": "online", 00:31:36.690 "raid_level": "raid1", 00:31:36.690 "superblock": true, 00:31:36.690 "num_base_bdevs": 2, 00:31:36.690 "num_base_bdevs_discovered": 1, 00:31:36.690 "num_base_bdevs_operational": 1, 00:31:36.690 "base_bdevs_list": [ 00:31:36.690 { 00:31:36.690 "name": null, 00:31:36.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.690 "is_configured": false, 00:31:36.690 "data_offset": 256, 00:31:36.690 "data_size": 7936 00:31:36.690 }, 00:31:36.690 { 00:31:36.690 "name": "BaseBdev2", 00:31:36.690 "uuid": "cf3c56bd-ef01-5591-9cb9-3adb36137170", 00:31:36.690 "is_configured": true, 00:31:36.690 "data_offset": 256, 00:31:36.690 "data_size": 7936 00:31:36.690 } 00:31:36.690 ] 00:31:36.690 }' 00:31:36.690 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.690 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:36.690 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 100851 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 100851 ']' 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 100851 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100851 00:31:36.949 killing process with pid 100851 00:31:36.949 Received shutdown signal, test time was about 60.000000 seconds 00:31:36.949 00:31:36.949 Latency(us) 00:31:36.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.949 =================================================================================================================== 00:31:36.949 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100851' 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 100851 00:31:36.949 11:42:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 100851 00:31:36.949 [2024-07-25 11:42:52.611364] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:36.949 [2024-07-25 11:42:52.611533] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:36.949 [2024-07-25 11:42:52.611605] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:36.949 [2024-07-25 11:42:52.611643] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:37.207 [2024-07-25 11:42:52.880307] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:38.650 11:42:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:31:38.650 00:31:38.650 real 0m34.974s 00:31:38.650 user 0m55.148s 00:31:38.650 sys 0m4.234s 00:31:38.650 ************************************ 00:31:38.650 END TEST raid_rebuild_test_sb_4k 00:31:38.650 ************************************ 00:31:38.650 11:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:38.650 11:42:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:38.650 11:42:54 bdev_raid -- bdev/bdev_raid.sh@982 -- # base_malloc_params='-m 32' 00:31:38.650 11:42:54 bdev_raid -- bdev/bdev_raid.sh@983 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:31:38.650 11:42:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:38.650 11:42:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:38.650 11:42:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:38.650 ************************************ 00:31:38.650 START TEST raid_state_function_test_sb_md_separate 00:31:38.650 ************************************ 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:38.650 11:42:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:38.650 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=101697 00:31:38.651 Process raid pid: 101697 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 101697' 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 101697 /var/tmp/spdk-raid.sock 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 101697 ']' 00:31:38.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
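The raid_state_function_test_sb_md_separate run starts its own SPDK target: bdev_svc is launched with a private RPC socket and raid debug logging, and waitforlisten blocks until that socket answers. A simplified sketch of that startup, using the paths and flags from this log; the polling loop is only a stand-in for the real waitforlisten helper in autotest_common.sh:

# Launch the bdev service application with its own RPC socket and raid logging.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# Poll the RPC socket until the target is ready to accept commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done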
00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:38.651 11:42:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:38.651 [2024-07-25 11:42:54.200079] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:31:38.651 [2024-07-25 11:42:54.200259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.651 [2024-07-25 11:42:54.376866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.909 [2024-07-25 11:42:54.617080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.167 [2024-07-25 11:42:54.820995] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:39.167 [2024-07-25 11:42:54.821065] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:39.426 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:39.426 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:31:39.426 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:39.684 [2024-07-25 11:42:55.346562] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:39.684 [2024-07-25 11:42:55.346648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:39.684 [2024-07-25 11:42:55.346670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:39.684 [2024-07-25 11:42:55.346684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.684 11:42:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.684 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.943 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:39.943 "name": "Existed_Raid", 00:31:39.943 "uuid": "7d983618-531d-4743-94aa-7d8e26632bfe", 00:31:39.943 "strip_size_kb": 0, 00:31:39.943 "state": "configuring", 00:31:39.943 "raid_level": "raid1", 00:31:39.943 "superblock": true, 00:31:39.943 "num_base_bdevs": 2, 00:31:39.943 "num_base_bdevs_discovered": 0, 00:31:39.943 "num_base_bdevs_operational": 2, 00:31:39.943 "base_bdevs_list": [ 00:31:39.943 { 00:31:39.943 "name": "BaseBdev1", 00:31:39.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.943 "is_configured": false, 00:31:39.943 "data_offset": 0, 00:31:39.943 "data_size": 0 00:31:39.943 }, 00:31:39.943 { 00:31:39.943 "name": "BaseBdev2", 00:31:39.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.943 "is_configured": false, 00:31:39.943 "data_offset": 0, 00:31:39.943 "data_size": 0 00:31:39.943 } 00:31:39.943 ] 00:31:39.943 }' 00:31:39.943 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:39.943 11:42:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:40.509 11:42:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:40.768 [2024-07-25 11:42:56.466713] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:40.768 [2024-07-25 11:42:56.466762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:40.768 11:42:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:41.026 [2024-07-25 11:42:56.742800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:41.026 [2024-07-25 11:42:56.742865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:41.026 [2024-07-25 11:42:56.742885] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:41.026 [2024-07-25 11:42:56.742898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:41.026 11:42:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:31:41.285 [2024-07-25 11:42:57.007876] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:41.285 BaseBdev1 00:31:41.285 11:42:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:41.285 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:41.543 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:41.801 [ 00:31:41.801 { 00:31:41.801 "name": "BaseBdev1", 00:31:41.801 "aliases": [ 00:31:41.801 "aed9cdc3-7674-404d-bcd3-11e04425d0b3" 00:31:41.801 ], 00:31:41.801 "product_name": "Malloc disk", 00:31:41.801 "block_size": 4096, 00:31:41.801 "num_blocks": 8192, 00:31:41.801 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:41.801 "md_size": 32, 00:31:41.801 "md_interleave": false, 00:31:41.801 "dif_type": 0, 00:31:41.801 "assigned_rate_limits": { 00:31:41.801 "rw_ios_per_sec": 0, 00:31:41.801 "rw_mbytes_per_sec": 0, 00:31:41.801 "r_mbytes_per_sec": 0, 00:31:41.801 "w_mbytes_per_sec": 0 00:31:41.801 }, 00:31:41.801 "claimed": true, 00:31:41.801 "claim_type": "exclusive_write", 00:31:41.801 "zoned": false, 00:31:41.802 "supported_io_types": { 00:31:41.802 "read": true, 00:31:41.802 "write": true, 00:31:41.802 "unmap": true, 00:31:41.802 "flush": true, 00:31:41.802 "reset": true, 00:31:41.802 "nvme_admin": false, 00:31:41.802 "nvme_io": false, 00:31:41.802 "nvme_io_md": false, 00:31:41.802 "write_zeroes": true, 00:31:41.802 "zcopy": true, 00:31:41.802 "get_zone_info": false, 00:31:41.802 "zone_management": false, 00:31:41.802 "zone_append": false, 00:31:41.802 "compare": false, 00:31:41.802 "compare_and_write": false, 00:31:41.802 "abort": true, 00:31:41.802 "seek_hole": false, 00:31:41.802 "seek_data": false, 00:31:41.802 "copy": true, 00:31:41.802 "nvme_iov_md": false 00:31:41.802 }, 00:31:41.802 "memory_domains": [ 00:31:41.802 { 00:31:41.802 "dma_device_id": "system", 00:31:41.802 "dma_device_type": 1 00:31:41.802 }, 00:31:41.802 { 00:31:41.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:41.802 "dma_device_type": 2 00:31:41.802 } 00:31:41.802 ], 00:31:41.802 "driver_specific": {} 00:31:41.802 } 00:31:41.802 ] 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.802 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.060 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.060 "name": "Existed_Raid", 00:31:42.060 "uuid": "ce6db24c-f3f1-40ee-9dfb-8c82cde22bc9", 00:31:42.060 "strip_size_kb": 0, 00:31:42.060 "state": "configuring", 00:31:42.060 "raid_level": "raid1", 00:31:42.060 "superblock": true, 00:31:42.060 "num_base_bdevs": 2, 00:31:42.060 "num_base_bdevs_discovered": 1, 00:31:42.060 "num_base_bdevs_operational": 2, 00:31:42.060 "base_bdevs_list": [ 00:31:42.060 { 00:31:42.060 "name": "BaseBdev1", 00:31:42.060 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:42.060 "is_configured": true, 00:31:42.060 "data_offset": 256, 00:31:42.060 "data_size": 7936 00:31:42.060 }, 00:31:42.060 { 00:31:42.060 "name": "BaseBdev2", 00:31:42.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.060 "is_configured": false, 00:31:42.060 "data_offset": 0, 00:31:42.060 "data_size": 0 00:31:42.060 } 00:31:42.060 ] 00:31:42.060 }' 00:31:42.060 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.060 11:42:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:42.628 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:42.887 [2024-07-25 11:42:58.704374] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:42.887 [2024-07-25 11:42:58.704448] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:42.887 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:43.145 [2024-07-25 11:42:58.936545] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:43.145 [2024-07-25 11:42:58.938875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:43.145 [2024-07-25 11:42:58.938925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.145 11:42:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:43.403 11:42:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:43.403 "name": "Existed_Raid", 00:31:43.403 "uuid": "f6a210f7-4df5-4e06-aaff-1b75a6c88417", 00:31:43.403 "strip_size_kb": 0, 00:31:43.403 "state": "configuring", 00:31:43.403 "raid_level": "raid1", 00:31:43.403 "superblock": true, 00:31:43.403 "num_base_bdevs": 2, 00:31:43.403 "num_base_bdevs_discovered": 1, 00:31:43.403 "num_base_bdevs_operational": 2, 00:31:43.403 "base_bdevs_list": [ 00:31:43.403 { 00:31:43.403 "name": "BaseBdev1", 00:31:43.403 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:43.403 "is_configured": true, 00:31:43.403 "data_offset": 256, 00:31:43.403 "data_size": 7936 00:31:43.403 }, 00:31:43.403 { 00:31:43.403 "name": "BaseBdev2", 00:31:43.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.403 "is_configured": false, 00:31:43.403 "data_offset": 0, 00:31:43.403 "data_size": 0 00:31:43.403 } 00:31:43.403 ] 00:31:43.403 }' 00:31:43.403 11:42:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:43.403 11:42:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:43.970 11:42:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:31:44.253 [2024-07-25 11:43:00.110475] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:44.253 [2024-07-25 11:43:00.110789] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:44.253 [2024-07-25 
11:43:00.110813] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:44.253 [2024-07-25 11:43:00.110913] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:44.253 [2024-07-25 11:43:00.111067] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:44.253 [2024-07-25 11:43:00.111082] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:44.253 BaseBdev2 00:31:44.253 [2024-07-25 11:43:00.111217] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:44.253 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:44.511 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:44.770 [ 00:31:44.770 { 00:31:44.770 "name": "BaseBdev2", 00:31:44.770 "aliases": [ 00:31:44.770 "fdbe2a45-be76-4dd1-b833-92da6f61102b" 00:31:44.770 ], 00:31:44.770 "product_name": "Malloc disk", 00:31:44.770 "block_size": 4096, 00:31:44.770 "num_blocks": 8192, 00:31:44.770 "uuid": "fdbe2a45-be76-4dd1-b833-92da6f61102b", 00:31:44.770 "md_size": 32, 00:31:44.770 "md_interleave": false, 00:31:44.770 "dif_type": 0, 00:31:44.770 "assigned_rate_limits": { 00:31:44.770 "rw_ios_per_sec": 0, 00:31:44.770 "rw_mbytes_per_sec": 0, 00:31:44.770 "r_mbytes_per_sec": 0, 00:31:44.770 "w_mbytes_per_sec": 0 00:31:44.770 }, 00:31:44.770 "claimed": true, 00:31:44.770 "claim_type": "exclusive_write", 00:31:44.770 "zoned": false, 00:31:44.770 "supported_io_types": { 00:31:44.770 "read": true, 00:31:44.770 "write": true, 00:31:44.770 "unmap": true, 00:31:44.770 "flush": true, 00:31:44.770 "reset": true, 00:31:44.770 "nvme_admin": false, 00:31:44.770 "nvme_io": false, 00:31:44.770 "nvme_io_md": false, 00:31:44.770 "write_zeroes": true, 00:31:44.770 "zcopy": true, 00:31:44.770 "get_zone_info": false, 00:31:44.770 "zone_management": false, 00:31:44.770 "zone_append": false, 00:31:44.770 "compare": false, 00:31:44.770 "compare_and_write": false, 00:31:44.770 "abort": true, 00:31:44.770 "seek_hole": false, 00:31:44.770 "seek_data": false, 00:31:44.770 "copy": true, 00:31:44.770 "nvme_iov_md": false 00:31:44.770 }, 00:31:44.770 "memory_domains": [ 00:31:44.770 { 00:31:44.770 "dma_device_id": "system", 00:31:44.770 "dma_device_type": 1 00:31:44.770 }, 00:31:44.770 { 00:31:44.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:44.770 "dma_device_type": 2 00:31:44.770 } 00:31:44.770 ], 00:31:44.770 "driver_specific": {} 
00:31:44.770 } 00:31:44.770 ] 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.770 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.029 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.029 "name": "Existed_Raid", 00:31:45.029 "uuid": "f6a210f7-4df5-4e06-aaff-1b75a6c88417", 00:31:45.029 "strip_size_kb": 0, 00:31:45.029 "state": "online", 00:31:45.029 "raid_level": "raid1", 00:31:45.029 "superblock": true, 00:31:45.029 "num_base_bdevs": 2, 00:31:45.029 "num_base_bdevs_discovered": 2, 00:31:45.029 "num_base_bdevs_operational": 2, 00:31:45.029 "base_bdevs_list": [ 00:31:45.029 { 00:31:45.029 "name": "BaseBdev1", 00:31:45.029 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:45.029 "is_configured": true, 00:31:45.029 "data_offset": 256, 00:31:45.029 "data_size": 7936 00:31:45.029 }, 00:31:45.029 { 00:31:45.029 "name": "BaseBdev2", 00:31:45.029 "uuid": "fdbe2a45-be76-4dd1-b833-92da6f61102b", 00:31:45.029 "is_configured": true, 00:31:45.029 "data_offset": 256, 00:31:45.029 "data_size": 7936 00:31:45.029 } 00:31:45.029 ] 00:31:45.029 }' 00:31:45.029 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.029 11:43:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:45.964 
11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:45.964 [2024-07-25 11:43:01.731305] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.964 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:45.964 "name": "Existed_Raid", 00:31:45.964 "aliases": [ 00:31:45.964 "f6a210f7-4df5-4e06-aaff-1b75a6c88417" 00:31:45.964 ], 00:31:45.964 "product_name": "Raid Volume", 00:31:45.964 "block_size": 4096, 00:31:45.964 "num_blocks": 7936, 00:31:45.964 "uuid": "f6a210f7-4df5-4e06-aaff-1b75a6c88417", 00:31:45.964 "md_size": 32, 00:31:45.964 "md_interleave": false, 00:31:45.964 "dif_type": 0, 00:31:45.964 "assigned_rate_limits": { 00:31:45.964 "rw_ios_per_sec": 0, 00:31:45.964 "rw_mbytes_per_sec": 0, 00:31:45.964 "r_mbytes_per_sec": 0, 00:31:45.964 "w_mbytes_per_sec": 0 00:31:45.964 }, 00:31:45.964 "claimed": false, 00:31:45.964 "zoned": false, 00:31:45.964 "supported_io_types": { 00:31:45.964 "read": true, 00:31:45.964 "write": true, 00:31:45.964 "unmap": false, 00:31:45.964 "flush": false, 00:31:45.964 "reset": true, 00:31:45.964 "nvme_admin": false, 00:31:45.964 "nvme_io": false, 00:31:45.964 "nvme_io_md": false, 00:31:45.964 "write_zeroes": true, 00:31:45.964 "zcopy": false, 00:31:45.964 "get_zone_info": false, 00:31:45.964 "zone_management": false, 00:31:45.964 "zone_append": false, 00:31:45.964 "compare": false, 00:31:45.964 "compare_and_write": false, 00:31:45.964 "abort": false, 00:31:45.964 "seek_hole": false, 00:31:45.964 "seek_data": false, 00:31:45.964 "copy": false, 00:31:45.964 "nvme_iov_md": false 00:31:45.964 }, 00:31:45.964 "memory_domains": [ 00:31:45.964 { 00:31:45.964 "dma_device_id": "system", 00:31:45.964 "dma_device_type": 1 00:31:45.964 }, 00:31:45.964 { 00:31:45.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.964 "dma_device_type": 2 00:31:45.964 }, 00:31:45.964 { 00:31:45.964 "dma_device_id": "system", 00:31:45.964 "dma_device_type": 1 00:31:45.964 }, 00:31:45.964 { 00:31:45.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.964 "dma_device_type": 2 00:31:45.964 } 00:31:45.964 ], 00:31:45.964 "driver_specific": { 00:31:45.964 "raid": { 00:31:45.964 "uuid": "f6a210f7-4df5-4e06-aaff-1b75a6c88417", 00:31:45.964 "strip_size_kb": 0, 00:31:45.964 "state": "online", 00:31:45.964 "raid_level": "raid1", 00:31:45.964 "superblock": true, 00:31:45.964 "num_base_bdevs": 2, 00:31:45.964 "num_base_bdevs_discovered": 2, 00:31:45.964 "num_base_bdevs_operational": 2, 00:31:45.964 "base_bdevs_list": [ 00:31:45.964 { 00:31:45.964 "name": "BaseBdev1", 00:31:45.964 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:45.965 "is_configured": true, 00:31:45.965 "data_offset": 256, 00:31:45.965 "data_size": 7936 00:31:45.965 }, 00:31:45.965 { 00:31:45.965 "name": 
"BaseBdev2", 00:31:45.965 "uuid": "fdbe2a45-be76-4dd1-b833-92da6f61102b", 00:31:45.965 "is_configured": true, 00:31:45.965 "data_offset": 256, 00:31:45.965 "data_size": 7936 00:31:45.965 } 00:31:45.965 ] 00:31:45.965 } 00:31:45.965 } 00:31:45.965 }' 00:31:45.965 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:45.965 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:45.965 BaseBdev2' 00:31:45.965 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:45.965 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:45.965 11:43:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:46.224 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:46.224 "name": "BaseBdev1", 00:31:46.224 "aliases": [ 00:31:46.224 "aed9cdc3-7674-404d-bcd3-11e04425d0b3" 00:31:46.224 ], 00:31:46.224 "product_name": "Malloc disk", 00:31:46.224 "block_size": 4096, 00:31:46.224 "num_blocks": 8192, 00:31:46.224 "uuid": "aed9cdc3-7674-404d-bcd3-11e04425d0b3", 00:31:46.224 "md_size": 32, 00:31:46.224 "md_interleave": false, 00:31:46.224 "dif_type": 0, 00:31:46.224 "assigned_rate_limits": { 00:31:46.224 "rw_ios_per_sec": 0, 00:31:46.224 "rw_mbytes_per_sec": 0, 00:31:46.224 "r_mbytes_per_sec": 0, 00:31:46.224 "w_mbytes_per_sec": 0 00:31:46.224 }, 00:31:46.224 "claimed": true, 00:31:46.224 "claim_type": "exclusive_write", 00:31:46.224 "zoned": false, 00:31:46.224 "supported_io_types": { 00:31:46.224 "read": true, 00:31:46.224 "write": true, 00:31:46.224 "unmap": true, 00:31:46.224 "flush": true, 00:31:46.224 "reset": true, 00:31:46.224 "nvme_admin": false, 00:31:46.224 "nvme_io": false, 00:31:46.224 "nvme_io_md": false, 00:31:46.224 "write_zeroes": true, 00:31:46.224 "zcopy": true, 00:31:46.224 "get_zone_info": false, 00:31:46.224 "zone_management": false, 00:31:46.224 "zone_append": false, 00:31:46.224 "compare": false, 00:31:46.224 "compare_and_write": false, 00:31:46.224 "abort": true, 00:31:46.224 "seek_hole": false, 00:31:46.224 "seek_data": false, 00:31:46.224 "copy": true, 00:31:46.224 "nvme_iov_md": false 00:31:46.224 }, 00:31:46.224 "memory_domains": [ 00:31:46.224 { 00:31:46.224 "dma_device_id": "system", 00:31:46.224 "dma_device_type": 1 00:31:46.224 }, 00:31:46.224 { 00:31:46.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:46.224 "dma_device_type": 2 00:31:46.224 } 00:31:46.224 ], 00:31:46.224 "driver_specific": {} 00:31:46.224 }' 00:31:46.224 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 
== 32 ]] 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:46.482 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:46.741 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:46.999 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:46.999 "name": "BaseBdev2", 00:31:46.999 "aliases": [ 00:31:46.999 "fdbe2a45-be76-4dd1-b833-92da6f61102b" 00:31:46.999 ], 00:31:46.999 "product_name": "Malloc disk", 00:31:46.999 "block_size": 4096, 00:31:46.999 "num_blocks": 8192, 00:31:46.999 "uuid": "fdbe2a45-be76-4dd1-b833-92da6f61102b", 00:31:46.999 "md_size": 32, 00:31:46.999 "md_interleave": false, 00:31:46.999 "dif_type": 0, 00:31:46.999 "assigned_rate_limits": { 00:31:46.999 "rw_ios_per_sec": 0, 00:31:46.999 "rw_mbytes_per_sec": 0, 00:31:46.999 "r_mbytes_per_sec": 0, 00:31:46.999 "w_mbytes_per_sec": 0 00:31:46.999 }, 00:31:46.999 "claimed": true, 00:31:46.999 "claim_type": "exclusive_write", 00:31:46.999 "zoned": false, 00:31:46.999 "supported_io_types": { 00:31:46.999 "read": true, 00:31:46.999 "write": true, 00:31:46.999 "unmap": true, 00:31:46.999 "flush": true, 00:31:46.999 "reset": true, 00:31:46.999 "nvme_admin": false, 00:31:46.999 "nvme_io": false, 00:31:46.999 "nvme_io_md": false, 00:31:46.999 "write_zeroes": true, 00:31:46.999 "zcopy": true, 00:31:46.999 "get_zone_info": false, 00:31:46.999 "zone_management": false, 00:31:46.999 "zone_append": false, 00:31:46.999 "compare": false, 00:31:46.999 "compare_and_write": false, 00:31:46.999 "abort": true, 00:31:46.999 "seek_hole": false, 00:31:46.999 "seek_data": false, 00:31:46.999 "copy": true, 00:31:46.999 "nvme_iov_md": false 00:31:46.999 }, 00:31:46.999 "memory_domains": [ 00:31:46.999 { 00:31:46.999 "dma_device_id": "system", 00:31:46.999 "dma_device_type": 1 00:31:47.000 }, 00:31:47.000 { 00:31:47.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.000 "dma_device_type": 2 00:31:47.000 } 00:31:47.000 ], 00:31:47.000 "driver_specific": {} 00:31:47.000 }' 00:31:47.000 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.000 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.000 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:47.000 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.327 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.327 11:43:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:47.327 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.327 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.327 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:47.327 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.327 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.587 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:47.587 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:47.587 [2024-07-25 11:43:03.415535] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:47.845 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:47.846 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:47.846 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.104 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:48.104 "name": 
"Existed_Raid", 00:31:48.104 "uuid": "f6a210f7-4df5-4e06-aaff-1b75a6c88417", 00:31:48.104 "strip_size_kb": 0, 00:31:48.104 "state": "online", 00:31:48.104 "raid_level": "raid1", 00:31:48.104 "superblock": true, 00:31:48.104 "num_base_bdevs": 2, 00:31:48.104 "num_base_bdevs_discovered": 1, 00:31:48.104 "num_base_bdevs_operational": 1, 00:31:48.104 "base_bdevs_list": [ 00:31:48.104 { 00:31:48.104 "name": null, 00:31:48.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.104 "is_configured": false, 00:31:48.104 "data_offset": 256, 00:31:48.104 "data_size": 7936 00:31:48.104 }, 00:31:48.104 { 00:31:48.104 "name": "BaseBdev2", 00:31:48.104 "uuid": "fdbe2a45-be76-4dd1-b833-92da6f61102b", 00:31:48.104 "is_configured": true, 00:31:48.104 "data_offset": 256, 00:31:48.104 "data_size": 7936 00:31:48.104 } 00:31:48.104 ] 00:31:48.104 }' 00:31:48.104 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:48.104 11:43:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:48.669 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:48.669 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:48.669 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.669 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:48.928 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:48.928 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:48.928 11:43:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:49.186 [2024-07-25 11:43:04.962956] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:49.186 [2024-07-25 11:43:04.963116] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:49.186 [2024-07-25 11:43:05.056264] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:49.186 [2024-07-25 11:43:05.056351] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:49.186 [2024-07-25 11:43:05.056368] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:49.444 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:49.444 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:49.444 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:49.444 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 101697 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 101697 ']' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 101697 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101697 00:31:49.701 killing process with pid 101697 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101697' 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 101697 00:31:49.701 [2024-07-25 11:43:05.403254] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:49.701 11:43:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 101697 00:31:49.701 [2024-07-25 11:43:05.417769] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:51.080 11:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:31:51.080 00:31:51.080 real 0m12.493s 00:31:51.080 user 0m21.704s 00:31:51.080 sys 0m1.622s 00:31:51.080 ************************************ 00:31:51.080 END TEST raid_state_function_test_sb_md_separate 00:31:51.080 ************************************ 00:31:51.080 11:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:51.080 11:43:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:51.080 11:43:06 bdev_raid -- bdev/bdev_raid.sh@984 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:31:51.080 11:43:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:51.080 11:43:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.080 11:43:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:51.080 ************************************ 00:31:51.080 START TEST raid_superblock_test_md_separate 00:31:51.080 ************************************ 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=102054 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 102054 /var/tmp/spdk-raid.sock 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 102054 ']' 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:51.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.080 11:43:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:51.080 [2024-07-25 11:43:06.746375] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
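
The entries above launch bdev_svc on /var/tmp/spdk-raid.sock and declare the locals for raid_superblock_test; the entries that follow build the array under test out of two malloc bdevs wrapped in passthru bdevs. A minimal sketch of that RPC sequence is shown below. Every call is one that appears verbatim later in this log; the $rpc shorthand is only for brevity here and is not part of the test.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Two malloc base bdevs: 32 MB, 4096-byte blocks, 32 bytes of separate
# per-block metadata (-m 32), matching the arguments used by this test.
$rpc bdev_malloc_create 32 4096 -m 32 -b malloc1
$rpc bdev_malloc_create 32 4096 -m 32 -b malloc2

# Wrap each malloc bdev in a passthru bdev with a fixed UUID, as the test does.
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble a raid1 bdev over the two passthrus with an on-disk superblock (-s).
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# Read the array back; the test expects state "online" with 2 of 2 base bdevs.
$rpc bdev_raid_get_bdevs all
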
00:31:51.080 [2024-07-25 11:43:06.746875] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102054 ] 00:31:51.080 [2024-07-25 11:43:06.918334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.338 [2024-07-25 11:43:07.157420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.596 [2024-07-25 11:43:07.357889] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:51.596 [2024-07-25 11:43:07.357944] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:51.855 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:31:52.112 malloc1 00:31:52.112 11:43:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:52.370 [2024-07-25 11:43:08.178324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:52.370 [2024-07-25 11:43:08.178424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.370 [2024-07-25 11:43:08.178458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:52.370 [2024-07-25 11:43:08.178478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.370 [2024-07-25 11:43:08.181124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.370 [2024-07-25 11:43:08.181175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:52.370 pt1 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:31:52.370 
11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:52.370 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:31:52.628 malloc2 00:31:52.628 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:52.886 [2024-07-25 11:43:08.735353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:52.886 [2024-07-25 11:43:08.735457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.886 [2024-07-25 11:43:08.735489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:52.886 [2024-07-25 11:43:08.735511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.886 [2024-07-25 11:43:08.738032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.886 [2024-07-25 11:43:08.738082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:52.886 pt2 00:31:52.886 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:52.886 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:52.886 11:43:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:53.452 [2024-07-25 11:43:09.031512] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:53.452 [2024-07-25 11:43:09.034115] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:53.452 [2024-07-25 11:43:09.034503] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:53.452 [2024-07-25 11:43:09.034659] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:53.452 [2024-07-25 11:43:09.034837] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:53.452 [2024-07-25 11:43:09.035178] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:53.452 [2024-07-25 11:43:09.035310] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:53.452 [2024-07-25 11:43:09.035670] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:53.452 "name": "raid_bdev1", 00:31:53.452 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:31:53.452 "strip_size_kb": 0, 00:31:53.452 "state": "online", 00:31:53.452 "raid_level": "raid1", 00:31:53.452 "superblock": true, 00:31:53.452 "num_base_bdevs": 2, 00:31:53.452 "num_base_bdevs_discovered": 2, 00:31:53.452 "num_base_bdevs_operational": 2, 00:31:53.452 "base_bdevs_list": [ 00:31:53.452 { 00:31:53.452 "name": "pt1", 00:31:53.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.452 "is_configured": true, 00:31:53.452 "data_offset": 256, 00:31:53.452 "data_size": 7936 00:31:53.452 }, 00:31:53.452 { 00:31:53.452 "name": "pt2", 00:31:53.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:53.452 "is_configured": true, 00:31:53.452 "data_offset": 256, 00:31:53.452 "data_size": 7936 00:31:53.452 } 00:31:53.452 ] 00:31:53.452 }' 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:53.452 11:43:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:54.384 11:43:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:54.384 [2024-07-25 
11:43:10.164359] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:54.384 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:54.384 "name": "raid_bdev1", 00:31:54.384 "aliases": [ 00:31:54.384 "11f09d3d-71f1-4dba-b756-336aa249f94e" 00:31:54.384 ], 00:31:54.384 "product_name": "Raid Volume", 00:31:54.384 "block_size": 4096, 00:31:54.385 "num_blocks": 7936, 00:31:54.385 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:31:54.385 "md_size": 32, 00:31:54.385 "md_interleave": false, 00:31:54.385 "dif_type": 0, 00:31:54.385 "assigned_rate_limits": { 00:31:54.385 "rw_ios_per_sec": 0, 00:31:54.385 "rw_mbytes_per_sec": 0, 00:31:54.385 "r_mbytes_per_sec": 0, 00:31:54.385 "w_mbytes_per_sec": 0 00:31:54.385 }, 00:31:54.385 "claimed": false, 00:31:54.385 "zoned": false, 00:31:54.385 "supported_io_types": { 00:31:54.385 "read": true, 00:31:54.385 "write": true, 00:31:54.385 "unmap": false, 00:31:54.385 "flush": false, 00:31:54.385 "reset": true, 00:31:54.385 "nvme_admin": false, 00:31:54.385 "nvme_io": false, 00:31:54.385 "nvme_io_md": false, 00:31:54.385 "write_zeroes": true, 00:31:54.385 "zcopy": false, 00:31:54.385 "get_zone_info": false, 00:31:54.385 "zone_management": false, 00:31:54.385 "zone_append": false, 00:31:54.385 "compare": false, 00:31:54.385 "compare_and_write": false, 00:31:54.385 "abort": false, 00:31:54.385 "seek_hole": false, 00:31:54.385 "seek_data": false, 00:31:54.385 "copy": false, 00:31:54.385 "nvme_iov_md": false 00:31:54.385 }, 00:31:54.385 "memory_domains": [ 00:31:54.385 { 00:31:54.385 "dma_device_id": "system", 00:31:54.385 "dma_device_type": 1 00:31:54.385 }, 00:31:54.385 { 00:31:54.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.385 "dma_device_type": 2 00:31:54.385 }, 00:31:54.385 { 00:31:54.385 "dma_device_id": "system", 00:31:54.385 "dma_device_type": 1 00:31:54.385 }, 00:31:54.385 { 00:31:54.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.385 "dma_device_type": 2 00:31:54.385 } 00:31:54.385 ], 00:31:54.385 "driver_specific": { 00:31:54.385 "raid": { 00:31:54.385 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:31:54.385 "strip_size_kb": 0, 00:31:54.385 "state": "online", 00:31:54.385 "raid_level": "raid1", 00:31:54.385 "superblock": true, 00:31:54.385 "num_base_bdevs": 2, 00:31:54.385 "num_base_bdevs_discovered": 2, 00:31:54.385 "num_base_bdevs_operational": 2, 00:31:54.385 "base_bdevs_list": [ 00:31:54.385 { 00:31:54.385 "name": "pt1", 00:31:54.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:54.385 "is_configured": true, 00:31:54.385 "data_offset": 256, 00:31:54.385 "data_size": 7936 00:31:54.385 }, 00:31:54.385 { 00:31:54.385 "name": "pt2", 00:31:54.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:54.385 "is_configured": true, 00:31:54.385 "data_offset": 256, 00:31:54.385 "data_size": 7936 00:31:54.385 } 00:31:54.385 ] 00:31:54.385 } 00:31:54.385 } 00:31:54.385 }' 00:31:54.385 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:54.385 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:54.385 pt2' 00:31:54.385 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:54.385 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:54.385 11:43:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:54.643 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:54.643 "name": "pt1", 00:31:54.643 "aliases": [ 00:31:54.643 "00000000-0000-0000-0000-000000000001" 00:31:54.643 ], 00:31:54.643 "product_name": "passthru", 00:31:54.643 "block_size": 4096, 00:31:54.643 "num_blocks": 8192, 00:31:54.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:54.643 "md_size": 32, 00:31:54.643 "md_interleave": false, 00:31:54.643 "dif_type": 0, 00:31:54.643 "assigned_rate_limits": { 00:31:54.643 "rw_ios_per_sec": 0, 00:31:54.643 "rw_mbytes_per_sec": 0, 00:31:54.643 "r_mbytes_per_sec": 0, 00:31:54.643 "w_mbytes_per_sec": 0 00:31:54.643 }, 00:31:54.643 "claimed": true, 00:31:54.643 "claim_type": "exclusive_write", 00:31:54.643 "zoned": false, 00:31:54.643 "supported_io_types": { 00:31:54.643 "read": true, 00:31:54.643 "write": true, 00:31:54.643 "unmap": true, 00:31:54.643 "flush": true, 00:31:54.643 "reset": true, 00:31:54.643 "nvme_admin": false, 00:31:54.643 "nvme_io": false, 00:31:54.643 "nvme_io_md": false, 00:31:54.643 "write_zeroes": true, 00:31:54.643 "zcopy": true, 00:31:54.643 "get_zone_info": false, 00:31:54.643 "zone_management": false, 00:31:54.643 "zone_append": false, 00:31:54.643 "compare": false, 00:31:54.643 "compare_and_write": false, 00:31:54.643 "abort": true, 00:31:54.643 "seek_hole": false, 00:31:54.643 "seek_data": false, 00:31:54.643 "copy": true, 00:31:54.643 "nvme_iov_md": false 00:31:54.643 }, 00:31:54.643 "memory_domains": [ 00:31:54.643 { 00:31:54.643 "dma_device_id": "system", 00:31:54.643 "dma_device_type": 1 00:31:54.643 }, 00:31:54.643 { 00:31:54.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.643 "dma_device_type": 2 00:31:54.643 } 00:31:54.643 ], 00:31:54.643 "driver_specific": { 00:31:54.643 "passthru": { 00:31:54.643 "name": "pt1", 00:31:54.643 "base_bdev_name": "malloc1" 00:31:54.643 } 00:31:54.643 } 00:31:54.643 }' 00:31:54.643 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:54.901 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:55.159 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:55.159 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:31:55.159 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:31:55.159 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:55.159 11:43:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:55.418 "name": "pt2", 00:31:55.418 "aliases": [ 00:31:55.418 "00000000-0000-0000-0000-000000000002" 00:31:55.418 ], 00:31:55.418 "product_name": "passthru", 00:31:55.418 "block_size": 4096, 00:31:55.418 "num_blocks": 8192, 00:31:55.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:55.418 "md_size": 32, 00:31:55.418 "md_interleave": false, 00:31:55.418 "dif_type": 0, 00:31:55.418 "assigned_rate_limits": { 00:31:55.418 "rw_ios_per_sec": 0, 00:31:55.418 "rw_mbytes_per_sec": 0, 00:31:55.418 "r_mbytes_per_sec": 0, 00:31:55.418 "w_mbytes_per_sec": 0 00:31:55.418 }, 00:31:55.418 "claimed": true, 00:31:55.418 "claim_type": "exclusive_write", 00:31:55.418 "zoned": false, 00:31:55.418 "supported_io_types": { 00:31:55.418 "read": true, 00:31:55.418 "write": true, 00:31:55.418 "unmap": true, 00:31:55.418 "flush": true, 00:31:55.418 "reset": true, 00:31:55.418 "nvme_admin": false, 00:31:55.418 "nvme_io": false, 00:31:55.418 "nvme_io_md": false, 00:31:55.418 "write_zeroes": true, 00:31:55.418 "zcopy": true, 00:31:55.418 "get_zone_info": false, 00:31:55.418 "zone_management": false, 00:31:55.418 "zone_append": false, 00:31:55.418 "compare": false, 00:31:55.418 "compare_and_write": false, 00:31:55.418 "abort": true, 00:31:55.418 "seek_hole": false, 00:31:55.418 "seek_data": false, 00:31:55.418 "copy": true, 00:31:55.418 "nvme_iov_md": false 00:31:55.418 }, 00:31:55.418 "memory_domains": [ 00:31:55.418 { 00:31:55.418 "dma_device_id": "system", 00:31:55.418 "dma_device_type": 1 00:31:55.418 }, 00:31:55.418 { 00:31:55.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.418 "dma_device_type": 2 00:31:55.418 } 00:31:55.418 ], 00:31:55.418 "driver_specific": { 00:31:55.418 "passthru": { 00:31:55.418 "name": "pt2", 00:31:55.418 "base_bdev_name": "malloc2" 00:31:55.418 } 00:31:55.418 } 00:31:55.418 }' 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:55.418 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
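
The [[ 4096 == 4096 ]], [[ 32 == 32 ]], [[ false == false ]] and [[ 0 == 0 ]] checks above are verify_raid_bdev_properties walking each base bdev and comparing the block_size, md_size, md_interleave and dif_type reported by bdev_get_bdevs against the raid bdev's own values. A standalone sketch of the same kind of check follows; check_md_separate_bdev is a made-up name and the expected values are simply the ones printed above, so this is illustrative rather than the test's actual helper.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Returns 0 only if the named bdev exposes the md-separate layout seen above:
# 4096-byte blocks, 32-byte metadata, non-interleaved, DIF disabled.
check_md_separate_bdev() {
    local name=$1 info
    info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]') || return 1
    [[ $(jq .block_size <<< "$info") == 4096 ]] || return 1
    [[ $(jq .md_size <<< "$info") == 32 ]] || return 1
    [[ $(jq .md_interleave <<< "$info") == false ]] || return 1
    [[ $(jq .dif_type <<< "$info") == 0 ]] || return 1
}

check_md_separate_bdev pt1
check_md_separate_bdev pt2
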
00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:31:55.676 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:55.934 [2024-07-25 11:43:11.800909] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:56.192 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=11f09d3d-71f1-4dba-b756-336aa249f94e 00:31:56.192 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 11f09d3d-71f1-4dba-b756-336aa249f94e ']' 00:31:56.192 11:43:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:56.451 [2024-07-25 11:43:12.104546] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:56.451 [2024-07-25 11:43:12.104596] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:56.451 [2024-07-25 11:43:12.104717] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:56.451 [2024-07-25 11:43:12.104804] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:56.451 [2024-07-25 11:43:12.104819] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:56.451 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.451 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:31:56.710 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:31:56.710 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:31:56.710 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.710 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:56.967 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:56.967 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:57.226 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:57.226 11:43:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:31:57.484 11:43:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:57.484 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:57.484 [2024-07-25 11:43:13.352913] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:57.484 [2024-07-25 11:43:13.355385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:57.484 [2024-07-25 11:43:13.355483] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:57.484 [2024-07-25 11:43:13.355570] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:57.484 [2024-07-25 11:43:13.355598] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.484 [2024-07-25 11:43:13.355610] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:57.484 request: 00:31:57.484 { 00:31:57.484 "name": "raid_bdev1", 00:31:57.484 "raid_level": "raid1", 00:31:57.484 "base_bdevs": [ 00:31:57.484 "malloc1", 00:31:57.484 "malloc2" 00:31:57.484 ], 00:31:57.484 "superblock": false, 00:31:57.484 "method": "bdev_raid_create", 00:31:57.484 "req_id": 1 00:31:57.484 } 00:31:57.484 Got JSON-RPC error response 00:31:57.484 response: 00:31:57.484 { 00:31:57.484 "code": -17, 00:31:57.484 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:57.484 } 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.742 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:31:58.000 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:31:58.000 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:31:58.000 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:58.000 [2024-07-25 11:43:13.880957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:58.000 [2024-07-25 11:43:13.881288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.000 [2024-07-25 11:43:13.881367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:58.000 [2024-07-25 11:43:13.881551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.259 [2024-07-25 11:43:13.884167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.259 [2024-07-25 11:43:13.884341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:58.259 [2024-07-25 11:43:13.884542] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:58.259 [2024-07-25 11:43:13.884751] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:58.259 pt1 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.259 11:43:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.259 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.259 "name": "raid_bdev1", 00:31:58.259 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:31:58.259 "strip_size_kb": 0, 00:31:58.259 "state": "configuring", 00:31:58.259 "raid_level": "raid1", 00:31:58.259 "superblock": 
true, 00:31:58.259 "num_base_bdevs": 2, 00:31:58.259 "num_base_bdevs_discovered": 1, 00:31:58.259 "num_base_bdevs_operational": 2, 00:31:58.259 "base_bdevs_list": [ 00:31:58.259 { 00:31:58.259 "name": "pt1", 00:31:58.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:58.259 "is_configured": true, 00:31:58.259 "data_offset": 256, 00:31:58.259 "data_size": 7936 00:31:58.259 }, 00:31:58.259 { 00:31:58.259 "name": null, 00:31:58.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.259 "is_configured": false, 00:31:58.259 "data_offset": 256, 00:31:58.259 "data_size": 7936 00:31:58.259 } 00:31:58.259 ] 00:31:58.259 }' 00:31:58.259 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.259 11:43:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:59.195 [2024-07-25 11:43:14.969460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:59.195 [2024-07-25 11:43:14.969566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.195 [2024-07-25 11:43:14.969601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:59.195 [2024-07-25 11:43:14.969615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.195 [2024-07-25 11:43:14.969973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.195 [2024-07-25 11:43:14.969997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:59.195 [2024-07-25 11:43:14.970073] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:59.195 [2024-07-25 11:43:14.970103] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:59.195 [2024-07-25 11:43:14.970260] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:59.195 [2024-07-25 11:43:14.970282] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:59.195 [2024-07-25 11:43:14.970363] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:59.195 [2024-07-25 11:43:14.970508] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:59.195 [2024-07-25 11:43:14.970529] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:59.195 [2024-07-25 11:43:14.970667] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.195 pt2 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.195 11:43:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.453 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.453 "name": "raid_bdev1", 00:31:59.453 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:31:59.453 "strip_size_kb": 0, 00:31:59.453 "state": "online", 00:31:59.453 "raid_level": "raid1", 00:31:59.453 "superblock": true, 00:31:59.453 "num_base_bdevs": 2, 00:31:59.453 "num_base_bdevs_discovered": 2, 00:31:59.453 "num_base_bdevs_operational": 2, 00:31:59.453 "base_bdevs_list": [ 00:31:59.453 { 00:31:59.453 "name": "pt1", 00:31:59.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:59.453 "is_configured": true, 00:31:59.453 "data_offset": 256, 00:31:59.453 "data_size": 7936 00:31:59.453 }, 00:31:59.453 { 00:31:59.453 "name": "pt2", 00:31:59.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.453 "is_configured": true, 00:31:59.453 "data_offset": 256, 00:31:59.453 "data_size": 7936 00:31:59.453 } 00:31:59.453 ] 00:31:59.453 }' 00:31:59.453 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.453 11:43:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:00.430 11:43:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:00.430 [2024-07-25 11:43:16.186121] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:00.430 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:00.430 "name": "raid_bdev1", 00:32:00.430 "aliases": [ 00:32:00.430 "11f09d3d-71f1-4dba-b756-336aa249f94e" 00:32:00.430 ], 00:32:00.430 "product_name": "Raid Volume", 00:32:00.430 "block_size": 4096, 00:32:00.430 "num_blocks": 7936, 00:32:00.430 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:32:00.430 "md_size": 32, 00:32:00.430 "md_interleave": false, 00:32:00.430 "dif_type": 0, 00:32:00.430 "assigned_rate_limits": { 00:32:00.430 "rw_ios_per_sec": 0, 00:32:00.430 "rw_mbytes_per_sec": 0, 00:32:00.430 "r_mbytes_per_sec": 0, 00:32:00.430 "w_mbytes_per_sec": 0 00:32:00.430 }, 00:32:00.430 "claimed": false, 00:32:00.430 "zoned": false, 00:32:00.430 "supported_io_types": { 00:32:00.430 "read": true, 00:32:00.430 "write": true, 00:32:00.430 "unmap": false, 00:32:00.430 "flush": false, 00:32:00.430 "reset": true, 00:32:00.430 "nvme_admin": false, 00:32:00.430 "nvme_io": false, 00:32:00.430 "nvme_io_md": false, 00:32:00.430 "write_zeroes": true, 00:32:00.430 "zcopy": false, 00:32:00.430 "get_zone_info": false, 00:32:00.430 "zone_management": false, 00:32:00.430 "zone_append": false, 00:32:00.430 "compare": false, 00:32:00.430 "compare_and_write": false, 00:32:00.430 "abort": false, 00:32:00.430 "seek_hole": false, 00:32:00.430 "seek_data": false, 00:32:00.430 "copy": false, 00:32:00.430 "nvme_iov_md": false 00:32:00.430 }, 00:32:00.430 "memory_domains": [ 00:32:00.430 { 00:32:00.430 "dma_device_id": "system", 00:32:00.430 "dma_device_type": 1 00:32:00.430 }, 00:32:00.430 { 00:32:00.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.430 "dma_device_type": 2 00:32:00.430 }, 00:32:00.430 { 00:32:00.430 "dma_device_id": "system", 00:32:00.430 "dma_device_type": 1 00:32:00.430 }, 00:32:00.430 { 00:32:00.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.430 "dma_device_type": 2 00:32:00.430 } 00:32:00.430 ], 00:32:00.430 "driver_specific": { 00:32:00.430 "raid": { 00:32:00.430 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:32:00.430 "strip_size_kb": 0, 00:32:00.430 "state": "online", 00:32:00.430 "raid_level": "raid1", 00:32:00.430 "superblock": true, 00:32:00.430 "num_base_bdevs": 2, 00:32:00.430 "num_base_bdevs_discovered": 2, 00:32:00.430 "num_base_bdevs_operational": 2, 00:32:00.430 "base_bdevs_list": [ 00:32:00.430 { 00:32:00.430 "name": "pt1", 00:32:00.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:00.430 "is_configured": true, 00:32:00.430 "data_offset": 256, 00:32:00.430 "data_size": 7936 00:32:00.430 }, 00:32:00.430 { 00:32:00.430 "name": "pt2", 00:32:00.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:00.430 "is_configured": true, 00:32:00.430 "data_offset": 256, 00:32:00.430 "data_size": 7936 00:32:00.430 } 00:32:00.430 ] 00:32:00.430 } 00:32:00.430 } 00:32:00.430 }' 00:32:00.430 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:00.430 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:00.430 pt2' 00:32:00.430 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:00.430 11:43:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:00.430 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:00.689 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:00.689 "name": "pt1", 00:32:00.689 "aliases": [ 00:32:00.689 "00000000-0000-0000-0000-000000000001" 00:32:00.689 ], 00:32:00.689 "product_name": "passthru", 00:32:00.689 "block_size": 4096, 00:32:00.689 "num_blocks": 8192, 00:32:00.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:00.690 "md_size": 32, 00:32:00.690 "md_interleave": false, 00:32:00.690 "dif_type": 0, 00:32:00.690 "assigned_rate_limits": { 00:32:00.690 "rw_ios_per_sec": 0, 00:32:00.690 "rw_mbytes_per_sec": 0, 00:32:00.690 "r_mbytes_per_sec": 0, 00:32:00.690 "w_mbytes_per_sec": 0 00:32:00.690 }, 00:32:00.690 "claimed": true, 00:32:00.690 "claim_type": "exclusive_write", 00:32:00.690 "zoned": false, 00:32:00.690 "supported_io_types": { 00:32:00.690 "read": true, 00:32:00.690 "write": true, 00:32:00.690 "unmap": true, 00:32:00.690 "flush": true, 00:32:00.690 "reset": true, 00:32:00.690 "nvme_admin": false, 00:32:00.690 "nvme_io": false, 00:32:00.690 "nvme_io_md": false, 00:32:00.690 "write_zeroes": true, 00:32:00.690 "zcopy": true, 00:32:00.690 "get_zone_info": false, 00:32:00.690 "zone_management": false, 00:32:00.690 "zone_append": false, 00:32:00.690 "compare": false, 00:32:00.690 "compare_and_write": false, 00:32:00.690 "abort": true, 00:32:00.690 "seek_hole": false, 00:32:00.690 "seek_data": false, 00:32:00.690 "copy": true, 00:32:00.690 "nvme_iov_md": false 00:32:00.690 }, 00:32:00.690 "memory_domains": [ 00:32:00.690 { 00:32:00.690 "dma_device_id": "system", 00:32:00.690 "dma_device_type": 1 00:32:00.690 }, 00:32:00.690 { 00:32:00.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.690 "dma_device_type": 2 00:32:00.690 } 00:32:00.690 ], 00:32:00.690 "driver_specific": { 00:32:00.690 "passthru": { 00:32:00.690 "name": "pt1", 00:32:00.690 "base_bdev_name": "malloc1" 00:32:00.690 } 00:32:00.690 } 00:32:00.690 }' 00:32:00.690 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:00.690 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:00.948 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.206 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.206 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:01.206 11:43:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.206 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:01.206 11:43:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:01.466 "name": "pt2", 00:32:01.466 "aliases": [ 00:32:01.466 "00000000-0000-0000-0000-000000000002" 00:32:01.466 ], 00:32:01.466 "product_name": "passthru", 00:32:01.466 "block_size": 4096, 00:32:01.466 "num_blocks": 8192, 00:32:01.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:01.466 "md_size": 32, 00:32:01.466 "md_interleave": false, 00:32:01.466 "dif_type": 0, 00:32:01.466 "assigned_rate_limits": { 00:32:01.466 "rw_ios_per_sec": 0, 00:32:01.466 "rw_mbytes_per_sec": 0, 00:32:01.466 "r_mbytes_per_sec": 0, 00:32:01.466 "w_mbytes_per_sec": 0 00:32:01.466 }, 00:32:01.466 "claimed": true, 00:32:01.466 "claim_type": "exclusive_write", 00:32:01.466 "zoned": false, 00:32:01.466 "supported_io_types": { 00:32:01.466 "read": true, 00:32:01.466 "write": true, 00:32:01.466 "unmap": true, 00:32:01.466 "flush": true, 00:32:01.466 "reset": true, 00:32:01.466 "nvme_admin": false, 00:32:01.466 "nvme_io": false, 00:32:01.466 "nvme_io_md": false, 00:32:01.466 "write_zeroes": true, 00:32:01.466 "zcopy": true, 00:32:01.466 "get_zone_info": false, 00:32:01.466 "zone_management": false, 00:32:01.466 "zone_append": false, 00:32:01.466 "compare": false, 00:32:01.466 "compare_and_write": false, 00:32:01.466 "abort": true, 00:32:01.466 "seek_hole": false, 00:32:01.466 "seek_data": false, 00:32:01.466 "copy": true, 00:32:01.466 "nvme_iov_md": false 00:32:01.466 }, 00:32:01.466 "memory_domains": [ 00:32:01.466 { 00:32:01.466 "dma_device_id": "system", 00:32:01.466 "dma_device_type": 1 00:32:01.466 }, 00:32:01.466 { 00:32:01.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.466 "dma_device_type": 2 00:32:01.466 } 00:32:01.466 ], 00:32:01.466 "driver_specific": { 00:32:01.466 "passthru": { 00:32:01.466 "name": "pt2", 00:32:01.466 "base_bdev_name": "malloc2" 00:32:01.466 } 00:32:01.466 } 00:32:01.466 }' 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:01.466 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
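
Worth noting about the array being verified here: this instance of raid_bdev1 was not created by a second bdev_raid_create call. After the earlier teardown, re-registering the passthru bdevs was enough for the examine path to find the superblock written by the original -s creation and reclaim them (the "raid superblock found on bdev pt1/pt2" entries above), which is also why the direct bdev_raid_create on malloc1/malloc2 failed with -17 "File exists". A condensed sketch of that delete-and-reassemble sequence, using only RPC calls that appear in this log; the final jq readout of .state is illustrative.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Tear down the assembled array and its passthru bdevs; the superblock written
# through the passthrus remains on the underlying malloc bdevs.
$rpc bdev_raid_delete raid_bdev1
$rpc bdev_passthru_delete pt1
$rpc bdev_passthru_delete pt2

# Re-registering the passthrus triggers examine: the superblock is found,
# raid_bdev1 goes "configuring" after pt1 and "online" after pt2, with no
# further bdev_raid_create call.
$rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
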
00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:32:01.725 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:01.984 [2024-07-25 11:43:17.790591] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:01.984 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 11f09d3d-71f1-4dba-b756-336aa249f94e '!=' 11f09d3d-71f1-4dba-b756-336aa249f94e ']' 00:32:01.984 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:32:01.984 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:01.984 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:01.984 11:43:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:02.242 [2024-07-25 11:43:18.022388] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.242 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.501 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.501 "name": "raid_bdev1", 00:32:02.501 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:32:02.501 "strip_size_kb": 0, 00:32:02.501 "state": "online", 00:32:02.501 "raid_level": "raid1", 00:32:02.501 "superblock": true, 00:32:02.501 "num_base_bdevs": 2, 00:32:02.501 "num_base_bdevs_discovered": 1, 00:32:02.501 "num_base_bdevs_operational": 1, 00:32:02.501 "base_bdevs_list": [ 00:32:02.501 { 00:32:02.501 "name": null, 00:32:02.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.501 "is_configured": false, 00:32:02.501 
"data_offset": 256, 00:32:02.501 "data_size": 7936 00:32:02.501 }, 00:32:02.501 { 00:32:02.501 "name": "pt2", 00:32:02.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:02.501 "is_configured": true, 00:32:02.501 "data_offset": 256, 00:32:02.501 "data_size": 7936 00:32:02.501 } 00:32:02.501 ] 00:32:02.501 }' 00:32:02.501 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.501 11:43:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:03.438 11:43:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:03.438 [2024-07-25 11:43:19.254664] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:03.438 [2024-07-25 11:43:19.254712] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:03.438 [2024-07-25 11:43:19.254804] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:03.438 [2024-07-25 11:43:19.254873] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:03.438 [2024-07-25 11:43:19.254888] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:03.438 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:32:03.438 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.721 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:32:03.721 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:32:03.721 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:03.721 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:03.721 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:32:03.995 11:43:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:04.254 [2024-07-25 11:43:20.038821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:04.254 [2024-07-25 11:43:20.038912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:04.254 [2024-07-25 11:43:20.038944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:04.254 [2024-07-25 11:43:20.038959] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:04.254 [2024-07-25 11:43:20.041789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:04.254 [2024-07-25 11:43:20.041834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:04.254 [2024-07-25 11:43:20.041911] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:04.254 [2024-07-25 11:43:20.041972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:04.254 [2024-07-25 11:43:20.042105] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:04.254 [2024-07-25 11:43:20.042119] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:04.254 [2024-07-25 11:43:20.042216] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:04.254 [2024-07-25 11:43:20.042352] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:04.254 [2024-07-25 11:43:20.042371] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:04.254 [2024-07-25 11:43:20.042481] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:04.254 pt2 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.254 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.512 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.512 "name": "raid_bdev1", 00:32:04.512 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:32:04.512 "strip_size_kb": 0, 00:32:04.512 "state": "online", 00:32:04.512 "raid_level": "raid1", 00:32:04.512 "superblock": true, 00:32:04.512 "num_base_bdevs": 2, 00:32:04.512 "num_base_bdevs_discovered": 1, 00:32:04.512 "num_base_bdevs_operational": 1, 00:32:04.512 "base_bdevs_list": [ 00:32:04.512 { 00:32:04.512 "name": null, 00:32:04.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.512 "is_configured": false, 
00:32:04.512 "data_offset": 256, 00:32:04.512 "data_size": 7936 00:32:04.512 }, 00:32:04.512 { 00:32:04.512 "name": "pt2", 00:32:04.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:04.512 "is_configured": true, 00:32:04.512 "data_offset": 256, 00:32:04.512 "data_size": 7936 00:32:04.512 } 00:32:04.512 ] 00:32:04.512 }' 00:32:04.512 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.512 11:43:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:05.447 11:43:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:05.447 [2024-07-25 11:43:21.259083] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:05.447 [2024-07-25 11:43:21.259129] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:05.447 [2024-07-25 11:43:21.259229] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:05.447 [2024-07-25 11:43:21.259298] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:05.447 [2024-07-25 11:43:21.259317] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:05.447 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.447 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:32:05.705 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:32:05.705 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:32:05.705 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:32:05.705 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:05.964 [2024-07-25 11:43:21.779231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:05.964 [2024-07-25 11:43:21.779338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.964 [2024-07-25 11:43:21.779369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:05.964 [2024-07-25 11:43:21.779386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.964 [2024-07-25 11:43:21.781874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.964 [2024-07-25 11:43:21.781923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:05.964 [2024-07-25 11:43:21.781999] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:05.964 [2024-07-25 11:43:21.782068] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:05.964 pt1 00:32:05.964 [2024-07-25 11:43:21.782253] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:05.964 [2024-07-25 11:43:21.782281] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:32:05.964 [2024-07-25 11:43:21.782304] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:05.964 [2024-07-25 11:43:21.782385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:05.964 [2024-07-25 11:43:21.782476] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:05.964 [2024-07-25 11:43:21.782499] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:05.964 [2024-07-25 11:43:21.782598] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:05.964 [2024-07-25 11:43:21.782759] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:05.964 [2024-07-25 11:43:21.782774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:05.964 [2024-07-25 11:43:21.782900] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.964 11:43:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.223 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:06.223 "name": "raid_bdev1", 00:32:06.223 "uuid": "11f09d3d-71f1-4dba-b756-336aa249f94e", 00:32:06.223 "strip_size_kb": 0, 00:32:06.223 "state": "online", 00:32:06.223 "raid_level": "raid1", 00:32:06.223 "superblock": true, 00:32:06.223 "num_base_bdevs": 2, 00:32:06.223 "num_base_bdevs_discovered": 1, 00:32:06.223 "num_base_bdevs_operational": 1, 00:32:06.223 "base_bdevs_list": [ 00:32:06.223 { 00:32:06.223 "name": null, 00:32:06.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.223 "is_configured": false, 00:32:06.223 "data_offset": 256, 00:32:06.223 "data_size": 7936 00:32:06.223 }, 00:32:06.223 { 00:32:06.223 "name": "pt2", 00:32:06.223 "uuid": "00000000-0000-0000-0000-000000000002", 
00:32:06.223 "is_configured": true, 00:32:06.223 "data_offset": 256, 00:32:06.223 "data_size": 7936 00:32:06.223 } 00:32:06.223 ] 00:32:06.223 }' 00:32:06.223 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:06.223 11:43:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:06.820 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:06.820 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:07.112 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:32:07.112 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:32:07.112 11:43:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:07.372 [2024-07-25 11:43:23.159881] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 11f09d3d-71f1-4dba-b756-336aa249f94e '!=' 11f09d3d-71f1-4dba-b756-336aa249f94e ']' 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 102054 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 102054 ']' 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 102054 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102054 00:32:07.372 killing process with pid 102054 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102054' 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 102054 00:32:07.372 [2024-07-25 11:43:23.205244] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:07.372 11:43:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 102054 00:32:07.372 [2024-07-25 11:43:23.205345] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.372 [2024-07-25 11:43:23.205426] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.372 [2024-07-25 11:43:23.205440] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:07.631 [2024-07-25 11:43:23.402419] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:09.007 11:43:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # 
return 0 00:32:09.007 00:32:09.007 real 0m17.923s 00:32:09.007 user 0m32.275s 00:32:09.007 sys 0m2.347s 00:32:09.007 11:43:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.007 ************************************ 00:32:09.007 END TEST raid_superblock_test_md_separate 00:32:09.007 ************************************ 00:32:09.007 11:43:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:09.007 11:43:24 bdev_raid -- bdev/bdev_raid.sh@985 -- # '[' true = true ']' 00:32:09.007 11:43:24 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:32:09.007 11:43:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:09.007 11:43:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.007 11:43:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:09.007 ************************************ 00:32:09.007 START TEST raid_rebuild_test_sb_md_separate 00:32:09.007 ************************************ 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 
-- # local data_offset 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=102569 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 102569 /var/tmp/spdk-raid.sock 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 102569 ']' 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.007 11:43:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:09.007 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:09.007 Zero copy mechanism will not be used. 00:32:09.007 [2024-07-25 11:43:24.711111] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:32:09.007 [2024-07-25 11:43:24.711276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102569 ] 00:32:09.007 [2024-07-25 11:43:24.871931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.265 [2024-07-25 11:43:25.109578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.524 [2024-07-25 11:43:25.310027] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:09.524 [2024-07-25 11:43:25.310108] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.090 11:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.090 11:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:10.090 11:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:10.090 11:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:10.090 BaseBdev1_malloc 00:32:10.348 11:43:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:10.348 [2024-07-25 11:43:26.197836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:10.348 [2024-07-25 11:43:26.197932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.348 [2024-07-25 11:43:26.197972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:10.349 [2024-07-25 11:43:26.197990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.349 [2024-07-25 11:43:26.200432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.349 [2024-07-25 11:43:26.200475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:10.349 BaseBdev1 00:32:10.349 11:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:10.349 11:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:10.625 BaseBdev2_malloc 00:32:10.891 11:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:10.891 [2024-07-25 11:43:26.722268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:10.891 [2024-07-25 11:43:26.722362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.891 [2024-07-25 11:43:26.722404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:10.891 [2024-07-25 11:43:26.722420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.891 [2024-07-25 11:43:26.724894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.891 
[2024-07-25 11:43:26.724939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:10.891 BaseBdev2 00:32:10.891 11:43:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:11.458 spare_malloc 00:32:11.458 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:11.458 spare_delay 00:32:11.458 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:11.716 [2024-07-25 11:43:27.547247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:11.716 [2024-07-25 11:43:27.547347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.716 [2024-07-25 11:43:27.547392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:11.716 [2024-07-25 11:43:27.547410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.716 [2024-07-25 11:43:27.549960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.716 [2024-07-25 11:43:27.550002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:11.716 spare 00:32:11.716 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:11.974 [2024-07-25 11:43:27.831383] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:11.974 [2024-07-25 11:43:27.833766] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:11.974 [2024-07-25 11:43:27.834030] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:11.974 [2024-07-25 11:43:27.834050] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:11.974 [2024-07-25 11:43:27.834183] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:11.974 [2024-07-25 11:43:27.834354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:11.974 [2024-07-25 11:43:27.834376] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:11.974 [2024-07-25 11:43:27.834521] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:11.975 11:43:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.975 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.233 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.233 11:43:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.233 11:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:12.233 "name": "raid_bdev1", 00:32:12.233 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:12.233 "strip_size_kb": 0, 00:32:12.233 "state": "online", 00:32:12.233 "raid_level": "raid1", 00:32:12.233 "superblock": true, 00:32:12.233 "num_base_bdevs": 2, 00:32:12.233 "num_base_bdevs_discovered": 2, 00:32:12.233 "num_base_bdevs_operational": 2, 00:32:12.233 "base_bdevs_list": [ 00:32:12.233 { 00:32:12.233 "name": "BaseBdev1", 00:32:12.233 "uuid": "484c5324-0707-5ab6-a4b8-f5fd9055acaa", 00:32:12.233 "is_configured": true, 00:32:12.233 "data_offset": 256, 00:32:12.233 "data_size": 7936 00:32:12.233 }, 00:32:12.233 { 00:32:12.233 "name": "BaseBdev2", 00:32:12.233 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:12.233 "is_configured": true, 00:32:12.233 "data_offset": 256, 00:32:12.233 "data_size": 7936 00:32:12.233 } 00:32:12.233 ] 00:32:12.233 }' 00:32:12.233 11:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:12.233 11:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:13.168 11:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:13.168 11:43:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:13.168 [2024-07-25 11:43:29.015978] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:13.168 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:32:13.168 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.168 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:13.427 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:13.685 [2024-07-25 11:43:29.487823] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:13.685 /dev/nbd0 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:13.685 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.686 1+0 records in 00:32:13.686 1+0 records out 00:32:13.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429531 s, 9.5 MB/s 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:32:13.686 11:43:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:14.620 7936+0 records in 00:32:14.620 7936+0 records out 00:32:14.620 32505856 bytes (33 MB, 31 MiB) copied, 0.879811 s, 36.9 MB/s 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:14.620 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:14.878 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:14.879 [2024-07-25 11:43:30.692143] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:14.879 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:15.142 [2024-07-25 11:43:30.912985] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:15.142 
11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.142 11:43:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.402 11:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:15.402 "name": "raid_bdev1", 00:32:15.402 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:15.402 "strip_size_kb": 0, 00:32:15.402 "state": "online", 00:32:15.402 "raid_level": "raid1", 00:32:15.402 "superblock": true, 00:32:15.402 "num_base_bdevs": 2, 00:32:15.402 "num_base_bdevs_discovered": 1, 00:32:15.402 "num_base_bdevs_operational": 1, 00:32:15.402 "base_bdevs_list": [ 00:32:15.402 { 00:32:15.402 "name": null, 00:32:15.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.402 "is_configured": false, 00:32:15.402 "data_offset": 256, 00:32:15.402 "data_size": 7936 00:32:15.402 }, 00:32:15.402 { 00:32:15.402 "name": "BaseBdev2", 00:32:15.402 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:15.402 "is_configured": true, 00:32:15.402 "data_offset": 256, 00:32:15.402 "data_size": 7936 00:32:15.402 } 00:32:15.402 ] 00:32:15.402 }' 00:32:15.402 11:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:15.402 11:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:16.336 11:43:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:16.336 [2024-07-25 11:43:32.101331] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:16.336 [2024-07-25 11:43:32.114933] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:16.336 [2024-07-25 11:43:32.117319] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:16.336 11:43:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.270 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.529 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:17.529 "name": "raid_bdev1", 00:32:17.529 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:17.529 "strip_size_kb": 0, 00:32:17.529 "state": "online", 00:32:17.529 "raid_level": "raid1", 00:32:17.529 "superblock": true, 00:32:17.529 "num_base_bdevs": 2, 00:32:17.529 "num_base_bdevs_discovered": 2, 00:32:17.529 "num_base_bdevs_operational": 2, 00:32:17.529 "process": { 00:32:17.529 "type": "rebuild", 00:32:17.529 "target": "spare", 00:32:17.529 "progress": { 00:32:17.529 "blocks": 3072, 00:32:17.529 "percent": 38 00:32:17.529 } 00:32:17.529 }, 00:32:17.529 "base_bdevs_list": [ 00:32:17.529 { 00:32:17.529 "name": "spare", 00:32:17.529 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:17.529 "is_configured": true, 00:32:17.529 "data_offset": 256, 00:32:17.529 "data_size": 7936 00:32:17.529 }, 00:32:17.529 { 00:32:17.529 "name": "BaseBdev2", 00:32:17.529 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:17.529 "is_configured": true, 00:32:17.529 "data_offset": 256, 00:32:17.529 "data_size": 7936 00:32:17.529 } 00:32:17.529 ] 00:32:17.529 }' 00:32:17.529 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:17.529 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:17.786 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:17.786 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:17.786 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:18.045 [2024-07-25 11:43:33.703356] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:18.045 [2024-07-25 11:43:33.729330] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:18.045 [2024-07-25 11:43:33.729419] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:18.045 [2024-07-25 11:43:33.729448] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:18.045 [2024-07-25 11:43:33.729461] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:18.045 11:43:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.045 11:43:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.303 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.303 "name": "raid_bdev1", 00:32:18.303 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:18.303 "strip_size_kb": 0, 00:32:18.303 "state": "online", 00:32:18.303 "raid_level": "raid1", 00:32:18.303 "superblock": true, 00:32:18.303 "num_base_bdevs": 2, 00:32:18.303 "num_base_bdevs_discovered": 1, 00:32:18.303 "num_base_bdevs_operational": 1, 00:32:18.303 "base_bdevs_list": [ 00:32:18.303 { 00:32:18.303 "name": null, 00:32:18.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.303 "is_configured": false, 00:32:18.303 "data_offset": 256, 00:32:18.303 "data_size": 7936 00:32:18.303 }, 00:32:18.303 { 00:32:18.303 "name": "BaseBdev2", 00:32:18.303 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:18.303 "is_configured": true, 00:32:18.303 "data_offset": 256, 00:32:18.303 "data_size": 7936 00:32:18.303 } 00:32:18.303 ] 00:32:18.303 }' 00:32:18.303 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.303 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:18.905 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.906 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.164 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:19.164 "name": "raid_bdev1", 00:32:19.164 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:19.164 "strip_size_kb": 0, 00:32:19.164 "state": "online", 00:32:19.164 "raid_level": "raid1", 00:32:19.164 "superblock": true, 00:32:19.164 "num_base_bdevs": 2, 00:32:19.164 "num_base_bdevs_discovered": 1, 00:32:19.164 "num_base_bdevs_operational": 1, 00:32:19.164 "base_bdevs_list": [ 00:32:19.164 { 00:32:19.164 "name": null, 00:32:19.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.164 "is_configured": false, 
00:32:19.164 "data_offset": 256, 00:32:19.164 "data_size": 7936 00:32:19.164 }, 00:32:19.164 { 00:32:19.164 "name": "BaseBdev2", 00:32:19.164 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:19.164 "is_configured": true, 00:32:19.164 "data_offset": 256, 00:32:19.164 "data_size": 7936 00:32:19.164 } 00:32:19.164 ] 00:32:19.164 }' 00:32:19.165 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:19.165 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:19.165 11:43:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:19.165 11:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:19.165 11:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:19.423 [2024-07-25 11:43:35.267901] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:19.423 [2024-07-25 11:43:35.280714] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:19.423 [2024-07-25 11:43:35.283090] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:19.423 11:43:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:20.814 "name": "raid_bdev1", 00:32:20.814 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:20.814 "strip_size_kb": 0, 00:32:20.814 "state": "online", 00:32:20.814 "raid_level": "raid1", 00:32:20.814 "superblock": true, 00:32:20.814 "num_base_bdevs": 2, 00:32:20.814 "num_base_bdevs_discovered": 2, 00:32:20.814 "num_base_bdevs_operational": 2, 00:32:20.814 "process": { 00:32:20.814 "type": "rebuild", 00:32:20.814 "target": "spare", 00:32:20.814 "progress": { 00:32:20.814 "blocks": 3072, 00:32:20.814 "percent": 38 00:32:20.814 } 00:32:20.814 }, 00:32:20.814 "base_bdevs_list": [ 00:32:20.814 { 00:32:20.814 "name": "spare", 00:32:20.814 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:20.814 "is_configured": true, 00:32:20.814 "data_offset": 256, 00:32:20.814 "data_size": 7936 00:32:20.814 }, 00:32:20.814 { 00:32:20.814 "name": "BaseBdev2", 00:32:20.814 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:20.814 "is_configured": true, 
00:32:20.814 "data_offset": 256, 00:32:20.814 "data_size": 7936 00:32:20.814 } 00:32:20.814 ] 00:32:20.814 }' 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:20.814 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:32:21.072 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1580 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:21.072 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:21.073 "name": "raid_bdev1", 00:32:21.073 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:21.073 "strip_size_kb": 0, 00:32:21.073 "state": "online", 00:32:21.073 "raid_level": "raid1", 00:32:21.073 "superblock": true, 00:32:21.073 "num_base_bdevs": 2, 00:32:21.073 "num_base_bdevs_discovered": 2, 00:32:21.073 "num_base_bdevs_operational": 2, 00:32:21.073 "process": { 00:32:21.073 "type": "rebuild", 00:32:21.073 "target": "spare", 00:32:21.073 "progress": { 00:32:21.073 "blocks": 4096, 00:32:21.073 "percent": 51 00:32:21.073 } 00:32:21.073 }, 00:32:21.073 "base_bdevs_list": [ 00:32:21.073 { 00:32:21.073 "name": "spare", 00:32:21.073 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:21.073 "is_configured": true, 00:32:21.073 "data_offset": 256, 00:32:21.073 "data_size": 7936 00:32:21.073 }, 00:32:21.073 { 00:32:21.073 "name": "BaseBdev2", 00:32:21.073 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:21.073 
"is_configured": true, 00:32:21.073 "data_offset": 256, 00:32:21.073 "data_size": 7936 00:32:21.073 } 00:32:21.073 ] 00:32:21.073 }' 00:32:21.073 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:21.332 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:21.332 11:43:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:21.332 11:43:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:21.332 11:43:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.304 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.563 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.564 "name": "raid_bdev1", 00:32:22.564 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:22.564 "strip_size_kb": 0, 00:32:22.564 "state": "online", 00:32:22.564 "raid_level": "raid1", 00:32:22.564 "superblock": true, 00:32:22.564 "num_base_bdevs": 2, 00:32:22.564 "num_base_bdevs_discovered": 2, 00:32:22.564 "num_base_bdevs_operational": 2, 00:32:22.564 "process": { 00:32:22.564 "type": "rebuild", 00:32:22.564 "target": "spare", 00:32:22.564 "progress": { 00:32:22.564 "blocks": 7424, 00:32:22.564 "percent": 93 00:32:22.564 } 00:32:22.564 }, 00:32:22.564 "base_bdevs_list": [ 00:32:22.564 { 00:32:22.564 "name": "spare", 00:32:22.564 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:22.564 "is_configured": true, 00:32:22.564 "data_offset": 256, 00:32:22.564 "data_size": 7936 00:32:22.564 }, 00:32:22.564 { 00:32:22.564 "name": "BaseBdev2", 00:32:22.564 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:22.564 "is_configured": true, 00:32:22.564 "data_offset": 256, 00:32:22.564 "data_size": 7936 00:32:22.564 } 00:32:22.564 ] 00:32:22.564 }' 00:32:22.564 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.564 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.564 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.564 [2024-07-25 11:43:38.406034] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:22.564 [2024-07-25 11:43:38.406158] 
bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:22.564 [2024-07-25 11:43:38.406319] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:22.564 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.564 11:43:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:23.941 "name": "raid_bdev1", 00:32:23.941 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:23.941 "strip_size_kb": 0, 00:32:23.941 "state": "online", 00:32:23.941 "raid_level": "raid1", 00:32:23.941 "superblock": true, 00:32:23.941 "num_base_bdevs": 2, 00:32:23.941 "num_base_bdevs_discovered": 2, 00:32:23.941 "num_base_bdevs_operational": 2, 00:32:23.941 "base_bdevs_list": [ 00:32:23.941 { 00:32:23.941 "name": "spare", 00:32:23.941 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:23.941 "is_configured": true, 00:32:23.941 "data_offset": 256, 00:32:23.941 "data_size": 7936 00:32:23.941 }, 00:32:23.941 { 00:32:23.941 "name": "BaseBdev2", 00:32:23.941 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:23.941 "is_configured": true, 00:32:23.941 "data_offset": 256, 00:32:23.941 "data_size": 7936 00:32:23.941 } 00:32:23.941 ] 00:32:23.941 }' 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.941 11:43:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.507 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:24.507 "name": "raid_bdev1", 00:32:24.507 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:24.507 "strip_size_kb": 0, 00:32:24.507 "state": "online", 00:32:24.507 "raid_level": "raid1", 00:32:24.507 "superblock": true, 00:32:24.507 "num_base_bdevs": 2, 00:32:24.508 "num_base_bdevs_discovered": 2, 00:32:24.508 "num_base_bdevs_operational": 2, 00:32:24.508 "base_bdevs_list": [ 00:32:24.508 { 00:32:24.508 "name": "spare", 00:32:24.508 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:24.508 "is_configured": true, 00:32:24.508 "data_offset": 256, 00:32:24.508 "data_size": 7936 00:32:24.508 }, 00:32:24.508 { 00:32:24.508 "name": "BaseBdev2", 00:32:24.508 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:24.508 "is_configured": true, 00:32:24.508 "data_offset": 256, 00:32:24.508 "data_size": 7936 00:32:24.508 } 00:32:24.508 ] 00:32:24.508 }' 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.508 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.766 11:43:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.766 "name": "raid_bdev1", 00:32:24.766 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:24.766 "strip_size_kb": 0, 00:32:24.766 "state": "online", 00:32:24.766 "raid_level": "raid1", 00:32:24.766 "superblock": true, 00:32:24.766 "num_base_bdevs": 2, 00:32:24.766 "num_base_bdevs_discovered": 2, 00:32:24.766 "num_base_bdevs_operational": 2, 00:32:24.766 "base_bdevs_list": [ 00:32:24.766 { 00:32:24.766 "name": "spare", 00:32:24.766 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:24.766 "is_configured": true, 00:32:24.766 "data_offset": 256, 00:32:24.766 "data_size": 7936 00:32:24.766 }, 00:32:24.766 { 00:32:24.766 "name": "BaseBdev2", 00:32:24.766 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:24.766 "is_configured": true, 00:32:24.766 "data_offset": 256, 00:32:24.766 "data_size": 7936 00:32:24.766 } 00:32:24.766 ] 00:32:24.766 }' 00:32:24.766 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.766 11:43:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.333 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:25.591 [2024-07-25 11:43:41.309318] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:25.591 [2024-07-25 11:43:41.309356] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:25.591 [2024-07-25 11:43:41.309500] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:25.591 [2024-07-25 11:43:41.309593] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:25.592 [2024-07-25 11:43:41.309615] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:25.592 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:32:25.592 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@12 -- # local i 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:25.850 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:26.108 /dev/nbd0 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:26.108 1+0 records in 00:32:26.108 1+0 records out 00:32:26.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043887 s, 9.3 MB/s 00:32:26.108 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:26.109 11:43:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:26.368 /dev/nbd1 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:26.368 11:43:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:26.368 1+0 records in 00:32:26.368 1+0 records out 00:32:26.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376827 s, 10.9 MB/s 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:26.368 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:26.627 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:26.885 11:43:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:26.885 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:27.143 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:32:27.144 11:43:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:27.403 [2024-07-25 11:43:43.263854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:27.403 [2024-07-25 11:43:43.264164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.403 [2024-07-25 11:43:43.264241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:27.403 [2024-07-25 11:43:43.264373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.403 [2024-07-25 11:43:43.266948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.403 [2024-07-25 11:43:43.266999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:27.403 [2024-07-25 11:43:43.267089] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:27.403 [2024-07-25 11:43:43.267167] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:27.403 [2024-07-25 11:43:43.267341] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.403 spare 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.403 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.662 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.662 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.662 [2024-07-25 11:43:43.367477] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:27.662 [2024-07-25 11:43:43.367532] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:27.662 [2024-07-25 11:43:43.367709] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:32:27.663 [2024-07-25 11:43:43.367953] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:27.663 [2024-07-25 11:43:43.367974] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:27.663 [2024-07-25 11:43:43.368134] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.663 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.663 "name": "raid_bdev1", 00:32:27.663 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:27.663 "strip_size_kb": 0, 00:32:27.663 "state": "online", 00:32:27.663 "raid_level": "raid1", 00:32:27.663 "superblock": true, 00:32:27.663 "num_base_bdevs": 2, 00:32:27.663 "num_base_bdevs_discovered": 2, 00:32:27.663 "num_base_bdevs_operational": 2, 00:32:27.663 "base_bdevs_list": [ 00:32:27.663 { 00:32:27.663 "name": "spare", 00:32:27.663 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:27.663 "is_configured": true, 00:32:27.921 "data_offset": 256, 00:32:27.921 "data_size": 7936 00:32:27.921 }, 00:32:27.921 { 00:32:27.921 "name": "BaseBdev2", 00:32:27.921 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:27.921 "is_configured": true, 00:32:27.921 "data_offset": 256, 00:32:27.921 "data_size": 7936 00:32:27.921 } 00:32:27.921 ] 00:32:27.921 }' 00:32:27.921 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.921 11:43:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.488 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:28.488 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:28.488 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:28.488 11:43:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:28.488 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:28.489 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.489 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:28.747 "name": "raid_bdev1", 00:32:28.747 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:28.747 "strip_size_kb": 0, 00:32:28.747 "state": "online", 00:32:28.747 "raid_level": "raid1", 00:32:28.747 "superblock": true, 00:32:28.747 "num_base_bdevs": 2, 00:32:28.747 "num_base_bdevs_discovered": 2, 00:32:28.747 "num_base_bdevs_operational": 2, 00:32:28.747 "base_bdevs_list": [ 00:32:28.747 { 00:32:28.747 "name": "spare", 00:32:28.747 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:28.747 "is_configured": true, 00:32:28.747 "data_offset": 256, 00:32:28.747 "data_size": 7936 00:32:28.747 }, 00:32:28.747 { 00:32:28.747 "name": "BaseBdev2", 00:32:28.747 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:28.747 "is_configured": true, 00:32:28.747 "data_offset": 256, 00:32:28.747 "data_size": 7936 00:32:28.747 } 00:32:28.747 ] 00:32:28.747 }' 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.747 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:29.006 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:32:29.006 11:43:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:29.266 [2024-07-25 11:43:45.092702] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:29.266 11:43:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.266 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.524 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.524 "name": "raid_bdev1", 00:32:29.524 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:29.524 "strip_size_kb": 0, 00:32:29.524 "state": "online", 00:32:29.524 "raid_level": "raid1", 00:32:29.524 "superblock": true, 00:32:29.524 "num_base_bdevs": 2, 00:32:29.524 "num_base_bdevs_discovered": 1, 00:32:29.524 "num_base_bdevs_operational": 1, 00:32:29.524 "base_bdevs_list": [ 00:32:29.524 { 00:32:29.524 "name": null, 00:32:29.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.524 "is_configured": false, 00:32:29.524 "data_offset": 256, 00:32:29.524 "data_size": 7936 00:32:29.524 }, 00:32:29.524 { 00:32:29.524 "name": "BaseBdev2", 00:32:29.524 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:29.524 "is_configured": true, 00:32:29.524 "data_offset": 256, 00:32:29.524 "data_size": 7936 00:32:29.524 } 00:32:29.524 ] 00:32:29.524 }' 00:32:29.524 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.524 11:43:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.140 11:43:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:30.707 [2024-07-25 11:43:46.309110] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:30.707 [2024-07-25 11:43:46.309345] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:30.707 [2024-07-25 11:43:46.309365] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
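[editor's note] The remove/re-add cycle recorded above can be reproduced by hand with the same RPCs the test is driving; the following is a minimal sketch only, assuming the RPC socket path /var/tmp/spdk-raid.sock and the bdev names raid_bdev1/spare from this run, and using an illustrative polling loop rather than the test's own verify helpers:

    #!/usr/bin/env bash
    # Remove the "spare" base bdev from raid_bdev1, re-add it, and poll the rebuild.
    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

    "${rpc[@]}" bdev_raid_remove_base_bdev spare          # raid_bdev1 drops to 1/2 operational base bdevs
    "${rpc[@]}" bdev_raid_add_base_bdev raid_bdev1 spare  # older superblock seq_number triggers a rebuild

    # Poll bdev_raid_get_bdevs until the rebuild process is no longer reported.
    while true; do
        info=$("${rpc[@]}" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == "rebuild" ]] || break
        jq -r '.process.progress.percent' <<< "$info"     # rebuild progress in percent
        sleep 1
    done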
00:32:30.707 [2024-07-25 11:43:46.309435] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:30.707 [2024-07-25 11:43:46.322431] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:32:30.707 [2024-07-25 11:43:46.324905] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:30.707 11:43:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:31.641 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.900 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:31.900 "name": "raid_bdev1", 00:32:31.900 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:31.900 "strip_size_kb": 0, 00:32:31.900 "state": "online", 00:32:31.900 "raid_level": "raid1", 00:32:31.900 "superblock": true, 00:32:31.900 "num_base_bdevs": 2, 00:32:31.900 "num_base_bdevs_discovered": 2, 00:32:31.900 "num_base_bdevs_operational": 2, 00:32:31.900 "process": { 00:32:31.900 "type": "rebuild", 00:32:31.900 "target": "spare", 00:32:31.900 "progress": { 00:32:31.900 "blocks": 3072, 00:32:31.900 "percent": 38 00:32:31.900 } 00:32:31.900 }, 00:32:31.900 "base_bdevs_list": [ 00:32:31.900 { 00:32:31.900 "name": "spare", 00:32:31.900 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:31.900 "is_configured": true, 00:32:31.900 "data_offset": 256, 00:32:31.900 "data_size": 7936 00:32:31.900 }, 00:32:31.900 { 00:32:31.900 "name": "BaseBdev2", 00:32:31.900 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:31.900 "is_configured": true, 00:32:31.900 "data_offset": 256, 00:32:31.900 "data_size": 7936 00:32:31.900 } 00:32:31.900 ] 00:32:31.900 }' 00:32:31.900 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:31.900 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:31.900 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:31.900 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:31.901 11:43:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:32.159 [2024-07-25 11:43:47.987136] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:32.159 [2024-07-25 11:43:48.037952] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:32:32.159 [2024-07-25 11:43:48.038064] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.159 [2024-07-25 11:43:48.038091] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:32.159 [2024-07-25 11:43:48.038102] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.417 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.675 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.675 "name": "raid_bdev1", 00:32:32.675 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:32.675 "strip_size_kb": 0, 00:32:32.675 "state": "online", 00:32:32.675 "raid_level": "raid1", 00:32:32.675 "superblock": true, 00:32:32.675 "num_base_bdevs": 2, 00:32:32.675 "num_base_bdevs_discovered": 1, 00:32:32.675 "num_base_bdevs_operational": 1, 00:32:32.675 "base_bdevs_list": [ 00:32:32.675 { 00:32:32.675 "name": null, 00:32:32.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.675 "is_configured": false, 00:32:32.675 "data_offset": 256, 00:32:32.675 "data_size": 7936 00:32:32.675 }, 00:32:32.675 { 00:32:32.675 "name": "BaseBdev2", 00:32:32.675 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:32.675 "is_configured": true, 00:32:32.675 "data_offset": 256, 00:32:32.675 "data_size": 7936 00:32:32.675 } 00:32:32.675 ] 00:32:32.675 }' 00:32:32.675 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.675 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.242 11:43:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:33.500 [2024-07-25 11:43:49.304224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:33.500 [2024-07-25 11:43:49.304326] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:33.500 [2024-07-25 11:43:49.304365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:33.500 [2024-07-25 11:43:49.304381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:33.500 [2024-07-25 11:43:49.304753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:33.500 [2024-07-25 11:43:49.304778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:33.500 [2024-07-25 11:43:49.304860] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:33.500 [2024-07-25 11:43:49.304878] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:33.500 [2024-07-25 11:43:49.304894] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:33.500 [2024-07-25 11:43:49.304930] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:33.500 [2024-07-25 11:43:49.318020] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:32:33.500 spare 00:32:33.500 [2024-07-25 11:43:49.320403] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:33.500 11:43:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.874 "name": "raid_bdev1", 00:32:34.874 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:34.874 "strip_size_kb": 0, 00:32:34.874 "state": "online", 00:32:34.874 "raid_level": "raid1", 00:32:34.874 "superblock": true, 00:32:34.874 "num_base_bdevs": 2, 00:32:34.874 "num_base_bdevs_discovered": 2, 00:32:34.874 "num_base_bdevs_operational": 2, 00:32:34.874 "process": { 00:32:34.874 "type": "rebuild", 00:32:34.874 "target": "spare", 00:32:34.874 "progress": { 00:32:34.874 "blocks": 3072, 00:32:34.874 "percent": 38 00:32:34.874 } 00:32:34.874 }, 00:32:34.874 "base_bdevs_list": [ 00:32:34.874 { 00:32:34.874 "name": "spare", 00:32:34.874 "uuid": "7ee46ad7-c57e-5d2b-bea4-6a03dd5f0f7c", 00:32:34.874 "is_configured": true, 00:32:34.874 "data_offset": 256, 00:32:34.874 "data_size": 7936 00:32:34.874 }, 00:32:34.874 { 00:32:34.874 "name": "BaseBdev2", 00:32:34.874 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:34.874 "is_configured": true, 00:32:34.874 
"data_offset": 256, 00:32:34.874 "data_size": 7936 00:32:34.874 } 00:32:34.874 ] 00:32:34.874 }' 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:34.874 11:43:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:35.132 [2024-07-25 11:43:50.970526] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:35.391 [2024-07-25 11:43:51.033395] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:35.391 [2024-07-25 11:43:51.033520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.391 [2024-07-25 11:43:51.033543] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:35.391 [2024-07-25 11:43:51.033556] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.391 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.650 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.650 "name": "raid_bdev1", 00:32:35.650 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:35.650 "strip_size_kb": 0, 00:32:35.650 "state": "online", 00:32:35.650 "raid_level": "raid1", 00:32:35.650 "superblock": true, 00:32:35.650 "num_base_bdevs": 2, 00:32:35.650 "num_base_bdevs_discovered": 1, 00:32:35.650 "num_base_bdevs_operational": 1, 00:32:35.650 "base_bdevs_list": [ 00:32:35.650 { 00:32:35.650 "name": null, 00:32:35.650 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:35.650 "is_configured": false, 00:32:35.650 "data_offset": 256, 00:32:35.650 "data_size": 7936 00:32:35.650 }, 00:32:35.650 { 00:32:35.650 "name": "BaseBdev2", 00:32:35.650 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:35.650 "is_configured": true, 00:32:35.650 "data_offset": 256, 00:32:35.650 "data_size": 7936 00:32:35.650 } 00:32:35.650 ] 00:32:35.650 }' 00:32:35.650 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.650 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.278 11:43:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.536 "name": "raid_bdev1", 00:32:36.536 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:36.536 "strip_size_kb": 0, 00:32:36.536 "state": "online", 00:32:36.536 "raid_level": "raid1", 00:32:36.536 "superblock": true, 00:32:36.536 "num_base_bdevs": 2, 00:32:36.536 "num_base_bdevs_discovered": 1, 00:32:36.536 "num_base_bdevs_operational": 1, 00:32:36.536 "base_bdevs_list": [ 00:32:36.536 { 00:32:36.536 "name": null, 00:32:36.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.536 "is_configured": false, 00:32:36.536 "data_offset": 256, 00:32:36.536 "data_size": 7936 00:32:36.536 }, 00:32:36.536 { 00:32:36.536 "name": "BaseBdev2", 00:32:36.536 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:36.536 "is_configured": true, 00:32:36.536 "data_offset": 256, 00:32:36.536 "data_size": 7936 00:32:36.536 } 00:32:36.536 ] 00:32:36.536 }' 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:36.536 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:36.794 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:37.052 [2024-07-25 11:43:52.804316] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:32:37.052 [2024-07-25 11:43:52.804467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.052 [2024-07-25 11:43:52.804531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:37.052 [2024-07-25 11:43:52.804550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.052 [2024-07-25 11:43:52.804841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.052 [2024-07-25 11:43:52.804870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:37.052 [2024-07-25 11:43:52.804945] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:37.052 [2024-07-25 11:43:52.804975] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:37.052 [2024-07-25 11:43:52.804996] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:37.052 BaseBdev1 00:32:37.052 11:43:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.986 11:43:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.244 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.244 "name": "raid_bdev1", 00:32:38.244 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:38.244 "strip_size_kb": 0, 00:32:38.244 "state": "online", 00:32:38.244 "raid_level": "raid1", 00:32:38.244 "superblock": true, 00:32:38.244 "num_base_bdevs": 2, 00:32:38.244 "num_base_bdevs_discovered": 1, 00:32:38.244 "num_base_bdevs_operational": 1, 00:32:38.244 "base_bdevs_list": [ 00:32:38.244 { 00:32:38.244 "name": null, 00:32:38.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.244 "is_configured": false, 00:32:38.244 "data_offset": 256, 00:32:38.244 "data_size": 7936 00:32:38.244 }, 00:32:38.244 { 00:32:38.244 "name": 
"BaseBdev2", 00:32:38.244 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:38.244 "is_configured": true, 00:32:38.244 "data_offset": 256, 00:32:38.244 "data_size": 7936 00:32:38.244 } 00:32:38.244 ] 00:32:38.244 }' 00:32:38.244 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.244 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.180 11:43:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.180 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:39.180 "name": "raid_bdev1", 00:32:39.180 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:39.180 "strip_size_kb": 0, 00:32:39.180 "state": "online", 00:32:39.180 "raid_level": "raid1", 00:32:39.180 "superblock": true, 00:32:39.180 "num_base_bdevs": 2, 00:32:39.180 "num_base_bdevs_discovered": 1, 00:32:39.180 "num_base_bdevs_operational": 1, 00:32:39.180 "base_bdevs_list": [ 00:32:39.180 { 00:32:39.180 "name": null, 00:32:39.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.180 "is_configured": false, 00:32:39.180 "data_offset": 256, 00:32:39.180 "data_size": 7936 00:32:39.180 }, 00:32:39.180 { 00:32:39.180 "name": "BaseBdev2", 00:32:39.180 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:39.180 "is_configured": true, 00:32:39.180 "data_offset": 256, 00:32:39.180 "data_size": 7936 00:32:39.180 } 00:32:39.180 ] 00:32:39.180 }' 00:32:39.180 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:39.438 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.439 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:39.439 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:39.439 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:39.696 [2024-07-25 11:43:55.413170] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:39.696 [2024-07-25 11:43:55.413388] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:39.696 [2024-07-25 11:43:55.413415] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:39.696 request: 00:32:39.696 { 00:32:39.696 "base_bdev": "BaseBdev1", 00:32:39.696 "raid_bdev": "raid_bdev1", 00:32:39.696 "method": "bdev_raid_add_base_bdev", 00:32:39.696 "req_id": 1 00:32:39.696 } 00:32:39.697 Got JSON-RPC error response 00:32:39.697 response: 00:32:39.697 { 00:32:39.697 "code": -22, 00:32:39.697 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:39.697 } 00:32:39.697 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:32:39.697 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:39.697 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:39.697 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:39.697 11:43:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.631 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.890 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:40.890 "name": "raid_bdev1", 00:32:40.890 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:40.890 "strip_size_kb": 0, 00:32:40.890 "state": "online", 00:32:40.890 "raid_level": "raid1", 00:32:40.890 "superblock": true, 00:32:40.890 "num_base_bdevs": 2, 00:32:40.890 "num_base_bdevs_discovered": 1, 00:32:40.890 "num_base_bdevs_operational": 1, 00:32:40.890 "base_bdevs_list": [ 00:32:40.890 { 00:32:40.890 "name": null, 00:32:40.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.890 "is_configured": false, 00:32:40.890 "data_offset": 256, 00:32:40.890 "data_size": 7936 00:32:40.890 }, 00:32:40.890 { 00:32:40.890 "name": "BaseBdev2", 00:32:40.890 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:40.890 "is_configured": true, 00:32:40.890 "data_offset": 256, 00:32:40.890 "data_size": 7936 00:32:40.890 } 00:32:40.890 ] 00:32:40.890 }' 00:32:40.890 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:40.890 11:43:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.825 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:42.083 "name": "raid_bdev1", 00:32:42.083 "uuid": "e3e1b2ef-eafc-43e1-8873-338c3c1f3961", 00:32:42.083 "strip_size_kb": 0, 00:32:42.083 "state": "online", 00:32:42.083 "raid_level": "raid1", 00:32:42.083 "superblock": true, 00:32:42.083 "num_base_bdevs": 2, 00:32:42.083 "num_base_bdevs_discovered": 1, 00:32:42.083 "num_base_bdevs_operational": 1, 00:32:42.083 "base_bdevs_list": [ 00:32:42.083 { 00:32:42.083 "name": null, 00:32:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.083 "is_configured": false, 00:32:42.083 "data_offset": 256, 00:32:42.083 "data_size": 7936 
00:32:42.083 }, 00:32:42.083 { 00:32:42.083 "name": "BaseBdev2", 00:32:42.083 "uuid": "8bb21f1d-f5d7-582a-9ed6-f946cb5064bf", 00:32:42.083 "is_configured": true, 00:32:42.083 "data_offset": 256, 00:32:42.083 "data_size": 7936 00:32:42.083 } 00:32:42.083 ] 00:32:42.083 }' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 102569 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 102569 ']' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 102569 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102569 00:32:42.083 killing process with pid 102569 00:32:42.083 Received shutdown signal, test time was about 60.000000 seconds 00:32:42.083 00:32:42.083 Latency(us) 00:32:42.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.083 =================================================================================================================== 00:32:42.083 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102569' 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 102569 00:32:42.083 [2024-07-25 11:43:57.917183] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:42.083 11:43:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 102569 00:32:42.083 [2024-07-25 11:43:57.917344] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:42.083 [2024-07-25 11:43:57.917409] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:42.083 [2024-07-25 11:43:57.917429] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:42.360 [2024-07-25 11:43:58.210187] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:43.739 11:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:32:43.739 00:32:43.739 real 0m34.786s 00:32:43.739 user 0m55.180s 00:32:43.739 sys 0m4.143s 00:32:43.739 11:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:43.739 11:43:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.739 ************************************ 00:32:43.739 END TEST raid_rebuild_test_sb_md_separate 00:32:43.739 ************************************ 00:32:43.739 11:43:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # base_malloc_params='-m 32 -i' 00:32:43.739 11:43:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:32:43.739 11:43:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:43.739 11:43:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:43.739 11:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:43.739 ************************************ 00:32:43.739 START TEST raid_state_function_test_sb_md_interleaved 00:32:43.739 ************************************ 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:43.739 Process raid pid: 103411 00:32:43.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=103411 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 103411' 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 103411 /var/tmp/spdk-raid.sock 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 103411 ']' 00:32:43.739 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:43.740 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:43.740 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:43.740 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:43.740 11:43:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:43.740 [2024-07-25 11:43:59.571778] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
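(Note on the trace above: the harness starts a dedicated bdev_svc application on the private RPC socket /var/tmp/spdk-raid.sock, waits for it to listen, then drives it with scripts/rpc.py and checks the resulting raid bdev with jq. The following is a minimal stand-alone sketch of that pattern; the polling loop, the rootdir/rpc_sock variable names and the cleanup at the end are illustrative assumptions, not the exact waitforlisten/killprocess helpers from autotest_common.sh. Only the individual RPC invocations and the jq filter are taken verbatim from the trace.)

  rootdir=/home/vagrant/spdk_repo/spdk
  rpc_sock=/var/tmp/spdk-raid.sock

  # launch the bdev service with bdev_raid debug logging, as in the trace
  "$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
  raid_pid=$!

  # crude stand-in for waitforlisten: poll until the RPC socket answers
  until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

  # create a superblock raid1 array from two not-yet-existing base bdevs;
  # the array is registered in the "configuring" state
  "$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # same jq pattern as verify_raid_bdev_state: select the array by name, read a field
  state=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [[ $state == configuring ]]

  kill "$raid_pid"; wait "$raid_pid"

(The later checks in the trace, such as num_base_bdevs_discovered and base_bdevs_list, are read from the same bdev_raid_get_bdevs JSON in the same way.)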
00:32:43.740 [2024-07-25 11:43:59.571963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.998 [2024-07-25 11:43:59.752258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.257 [2024-07-25 11:44:00.044278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.516 [2024-07-25 11:44:00.268563] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.516 [2024-07-25 11:44:00.268601] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:44.775 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:44.775 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:32:44.775 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:45.034 [2024-07-25 11:44:00.746611] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:45.034 [2024-07-25 11:44:00.746752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:45.034 [2024-07-25 11:44:00.746773] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:45.034 [2024-07-25 11:44:00.746788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.034 11:44:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.293 11:44:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:32:45.293 "name": "Existed_Raid", 00:32:45.293 "uuid": "d7eaaaf2-6b4c-4218-86e2-fe10e03e2003", 00:32:45.293 "strip_size_kb": 0, 00:32:45.293 "state": "configuring", 00:32:45.293 "raid_level": "raid1", 00:32:45.293 "superblock": true, 00:32:45.293 "num_base_bdevs": 2, 00:32:45.293 "num_base_bdevs_discovered": 0, 00:32:45.293 "num_base_bdevs_operational": 2, 00:32:45.293 "base_bdevs_list": [ 00:32:45.293 { 00:32:45.293 "name": "BaseBdev1", 00:32:45.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.293 "is_configured": false, 00:32:45.293 "data_offset": 0, 00:32:45.293 "data_size": 0 00:32:45.293 }, 00:32:45.293 { 00:32:45.293 "name": "BaseBdev2", 00:32:45.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.293 "is_configured": false, 00:32:45.293 "data_offset": 0, 00:32:45.293 "data_size": 0 00:32:45.293 } 00:32:45.293 ] 00:32:45.293 }' 00:32:45.293 11:44:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.293 11:44:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:45.907 11:44:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:46.487 [2024-07-25 11:44:02.102934] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:46.487 [2024-07-25 11:44:02.102991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:46.487 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:46.744 [2024-07-25 11:44:02.459042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:46.744 [2024-07-25 11:44:02.459111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:46.744 [2024-07-25 11:44:02.459132] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:46.745 [2024-07-25 11:44:02.459146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:46.745 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:32:47.003 [2024-07-25 11:44:02.807172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:47.003 BaseBdev1 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:47.003 11:44:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:47.003 11:44:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:47.568 [ 00:32:47.568 { 00:32:47.568 "name": "BaseBdev1", 00:32:47.568 "aliases": [ 00:32:47.568 "5abe477c-98d7-4adb-b6ef-dbdccacf5dee" 00:32:47.568 ], 00:32:47.568 "product_name": "Malloc disk", 00:32:47.568 "block_size": 4128, 00:32:47.568 "num_blocks": 8192, 00:32:47.568 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:47.568 "md_size": 32, 00:32:47.568 "md_interleave": true, 00:32:47.568 "dif_type": 0, 00:32:47.568 "assigned_rate_limits": { 00:32:47.568 "rw_ios_per_sec": 0, 00:32:47.568 "rw_mbytes_per_sec": 0, 00:32:47.568 "r_mbytes_per_sec": 0, 00:32:47.568 "w_mbytes_per_sec": 0 00:32:47.568 }, 00:32:47.568 "claimed": true, 00:32:47.568 "claim_type": "exclusive_write", 00:32:47.568 "zoned": false, 00:32:47.568 "supported_io_types": { 00:32:47.568 "read": true, 00:32:47.568 "write": true, 00:32:47.568 "unmap": true, 00:32:47.568 "flush": true, 00:32:47.568 "reset": true, 00:32:47.568 "nvme_admin": false, 00:32:47.568 "nvme_io": false, 00:32:47.568 "nvme_io_md": false, 00:32:47.568 "write_zeroes": true, 00:32:47.568 "zcopy": true, 00:32:47.568 "get_zone_info": false, 00:32:47.568 "zone_management": false, 00:32:47.568 "zone_append": false, 00:32:47.568 "compare": false, 00:32:47.568 "compare_and_write": false, 00:32:47.568 "abort": true, 00:32:47.568 "seek_hole": false, 00:32:47.568 "seek_data": false, 00:32:47.568 "copy": true, 00:32:47.568 "nvme_iov_md": false 00:32:47.568 }, 00:32:47.568 "memory_domains": [ 00:32:47.568 { 00:32:47.568 "dma_device_id": "system", 00:32:47.568 "dma_device_type": 1 00:32:47.568 }, 00:32:47.568 { 00:32:47.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.568 "dma_device_type": 2 00:32:47.568 } 00:32:47.568 ], 00:32:47.568 "driver_specific": {} 00:32:47.568 } 00:32:47.568 ] 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.568 11:44:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.568 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:48.133 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:48.133 "name": "Existed_Raid", 00:32:48.133 "uuid": "70bb66d0-9a3c-4fee-bc17-45e10d931ce1", 00:32:48.133 "strip_size_kb": 0, 00:32:48.133 "state": "configuring", 00:32:48.133 "raid_level": "raid1", 00:32:48.133 "superblock": true, 00:32:48.133 "num_base_bdevs": 2, 00:32:48.133 "num_base_bdevs_discovered": 1, 00:32:48.133 "num_base_bdevs_operational": 2, 00:32:48.133 "base_bdevs_list": [ 00:32:48.133 { 00:32:48.133 "name": "BaseBdev1", 00:32:48.133 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:48.133 "is_configured": true, 00:32:48.133 "data_offset": 256, 00:32:48.133 "data_size": 7936 00:32:48.133 }, 00:32:48.133 { 00:32:48.133 "name": "BaseBdev2", 00:32:48.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.133 "is_configured": false, 00:32:48.133 "data_offset": 0, 00:32:48.133 "data_size": 0 00:32:48.133 } 00:32:48.133 ] 00:32:48.133 }' 00:32:48.133 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:48.133 11:44:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:48.697 11:44:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:48.955 [2024-07-25 11:44:04.811917] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:48.955 [2024-07-25 11:44:04.812239] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:48.955 11:44:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:49.520 [2024-07-25 11:44:05.164061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:49.520 [2024-07-25 11:44:05.166717] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:49.520 [2024-07-25 11:44:05.167551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:49.520 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:49.520 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:49.520 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:49.521 11:44:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.521 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:49.829 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.829 "name": "Existed_Raid", 00:32:49.829 "uuid": "6399700d-c23e-4872-8d9d-45a073bba5c8", 00:32:49.829 "strip_size_kb": 0, 00:32:49.829 "state": "configuring", 00:32:49.829 "raid_level": "raid1", 00:32:49.829 "superblock": true, 00:32:49.829 "num_base_bdevs": 2, 00:32:49.829 "num_base_bdevs_discovered": 1, 00:32:49.829 "num_base_bdevs_operational": 2, 00:32:49.829 "base_bdevs_list": [ 00:32:49.829 { 00:32:49.829 "name": "BaseBdev1", 00:32:49.829 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:49.829 "is_configured": true, 00:32:49.829 "data_offset": 256, 00:32:49.829 "data_size": 7936 00:32:49.829 }, 00:32:49.829 { 00:32:49.829 "name": "BaseBdev2", 00:32:49.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.829 "is_configured": false, 00:32:49.829 "data_offset": 0, 00:32:49.829 "data_size": 0 00:32:49.829 } 00:32:49.829 ] 00:32:49.829 }' 00:32:49.829 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.829 11:44:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:50.394 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:32:50.652 [2024-07-25 11:44:06.488449] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:50.652 [2024-07-25 11:44:06.488769] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:50.652 [2024-07-25 11:44:06.488796] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:50.652 [2024-07-25 11:44:06.488907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:50.652 [2024-07-25 11:44:06.489010] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:50.652 [2024-07-25 11:44:06.489025] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:50.652 [2024-07-25 11:44:06.489110] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.652 BaseBdev2 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:50.652 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:50.911 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:51.170 [ 00:32:51.170 { 00:32:51.170 "name": "BaseBdev2", 00:32:51.170 "aliases": [ 00:32:51.170 "fd0b9a0c-e449-4d27-b020-47db1933f003" 00:32:51.170 ], 00:32:51.170 "product_name": "Malloc disk", 00:32:51.170 "block_size": 4128, 00:32:51.170 "num_blocks": 8192, 00:32:51.170 "uuid": "fd0b9a0c-e449-4d27-b020-47db1933f003", 00:32:51.170 "md_size": 32, 00:32:51.170 "md_interleave": true, 00:32:51.170 "dif_type": 0, 00:32:51.170 "assigned_rate_limits": { 00:32:51.170 "rw_ios_per_sec": 0, 00:32:51.170 "rw_mbytes_per_sec": 0, 00:32:51.170 "r_mbytes_per_sec": 0, 00:32:51.170 "w_mbytes_per_sec": 0 00:32:51.170 }, 00:32:51.170 "claimed": true, 00:32:51.170 "claim_type": "exclusive_write", 00:32:51.170 "zoned": false, 00:32:51.170 "supported_io_types": { 00:32:51.170 "read": true, 00:32:51.170 "write": true, 00:32:51.170 "unmap": true, 00:32:51.170 "flush": true, 00:32:51.170 "reset": true, 00:32:51.170 "nvme_admin": false, 00:32:51.170 "nvme_io": false, 00:32:51.170 "nvme_io_md": false, 00:32:51.170 "write_zeroes": true, 00:32:51.170 "zcopy": true, 00:32:51.170 "get_zone_info": false, 00:32:51.170 "zone_management": false, 00:32:51.170 "zone_append": false, 00:32:51.170 "compare": false, 00:32:51.170 "compare_and_write": false, 00:32:51.170 "abort": true, 00:32:51.170 "seek_hole": false, 00:32:51.170 "seek_data": false, 00:32:51.170 "copy": true, 00:32:51.170 "nvme_iov_md": false 00:32:51.170 }, 00:32:51.170 "memory_domains": [ 00:32:51.170 { 00:32:51.170 "dma_device_id": "system", 00:32:51.170 "dma_device_type": 1 00:32:51.170 }, 00:32:51.170 { 00:32:51.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:51.170 "dma_device_type": 2 00:32:51.170 } 00:32:51.170 ], 00:32:51.170 "driver_specific": {} 00:32:51.170 } 00:32:51.170 ] 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 
0 2 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:51.170 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.171 11:44:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:51.430 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:51.430 "name": "Existed_Raid", 00:32:51.430 "uuid": "6399700d-c23e-4872-8d9d-45a073bba5c8", 00:32:51.430 "strip_size_kb": 0, 00:32:51.430 "state": "online", 00:32:51.430 "raid_level": "raid1", 00:32:51.430 "superblock": true, 00:32:51.430 "num_base_bdevs": 2, 00:32:51.430 "num_base_bdevs_discovered": 2, 00:32:51.430 "num_base_bdevs_operational": 2, 00:32:51.430 "base_bdevs_list": [ 00:32:51.430 { 00:32:51.430 "name": "BaseBdev1", 00:32:51.430 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:51.430 "is_configured": true, 00:32:51.430 "data_offset": 256, 00:32:51.430 "data_size": 7936 00:32:51.430 }, 00:32:51.430 { 00:32:51.430 "name": "BaseBdev2", 00:32:51.430 "uuid": "fd0b9a0c-e449-4d27-b020-47db1933f003", 00:32:51.430 "is_configured": true, 00:32:51.430 "data_offset": 256, 00:32:51.430 "data_size": 7936 00:32:51.430 } 00:32:51.430 ] 00:32:51.430 }' 00:32:51.430 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:51.430 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:32:51.998 
11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:51.998 11:44:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:52.257 [2024-07-25 11:44:08.037372] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:52.257 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:52.257 "name": "Existed_Raid", 00:32:52.257 "aliases": [ 00:32:52.257 "6399700d-c23e-4872-8d9d-45a073bba5c8" 00:32:52.257 ], 00:32:52.257 "product_name": "Raid Volume", 00:32:52.257 "block_size": 4128, 00:32:52.257 "num_blocks": 7936, 00:32:52.257 "uuid": "6399700d-c23e-4872-8d9d-45a073bba5c8", 00:32:52.257 "md_size": 32, 00:32:52.257 "md_interleave": true, 00:32:52.257 "dif_type": 0, 00:32:52.257 "assigned_rate_limits": { 00:32:52.257 "rw_ios_per_sec": 0, 00:32:52.257 "rw_mbytes_per_sec": 0, 00:32:52.257 "r_mbytes_per_sec": 0, 00:32:52.257 "w_mbytes_per_sec": 0 00:32:52.257 }, 00:32:52.257 "claimed": false, 00:32:52.257 "zoned": false, 00:32:52.257 "supported_io_types": { 00:32:52.257 "read": true, 00:32:52.257 "write": true, 00:32:52.257 "unmap": false, 00:32:52.257 "flush": false, 00:32:52.257 "reset": true, 00:32:52.257 "nvme_admin": false, 00:32:52.257 "nvme_io": false, 00:32:52.257 "nvme_io_md": false, 00:32:52.257 "write_zeroes": true, 00:32:52.257 "zcopy": false, 00:32:52.257 "get_zone_info": false, 00:32:52.257 "zone_management": false, 00:32:52.257 "zone_append": false, 00:32:52.257 "compare": false, 00:32:52.257 "compare_and_write": false, 00:32:52.257 "abort": false, 00:32:52.257 "seek_hole": false, 00:32:52.257 "seek_data": false, 00:32:52.257 "copy": false, 00:32:52.257 "nvme_iov_md": false 00:32:52.257 }, 00:32:52.257 "memory_domains": [ 00:32:52.257 { 00:32:52.257 "dma_device_id": "system", 00:32:52.257 "dma_device_type": 1 00:32:52.257 }, 00:32:52.257 { 00:32:52.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.257 "dma_device_type": 2 00:32:52.257 }, 00:32:52.257 { 00:32:52.257 "dma_device_id": "system", 00:32:52.257 "dma_device_type": 1 00:32:52.257 }, 00:32:52.257 { 00:32:52.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.257 "dma_device_type": 2 00:32:52.257 } 00:32:52.257 ], 00:32:52.257 "driver_specific": { 00:32:52.257 "raid": { 00:32:52.257 "uuid": "6399700d-c23e-4872-8d9d-45a073bba5c8", 00:32:52.257 "strip_size_kb": 0, 00:32:52.257 "state": "online", 00:32:52.257 "raid_level": "raid1", 00:32:52.257 "superblock": true, 00:32:52.257 "num_base_bdevs": 2, 00:32:52.257 "num_base_bdevs_discovered": 2, 00:32:52.257 "num_base_bdevs_operational": 2, 00:32:52.257 "base_bdevs_list": [ 00:32:52.257 { 00:32:52.257 "name": "BaseBdev1", 00:32:52.257 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:52.257 "is_configured": true, 00:32:52.257 "data_offset": 256, 00:32:52.257 "data_size": 7936 00:32:52.257 }, 00:32:52.257 { 00:32:52.257 "name": "BaseBdev2", 00:32:52.257 "uuid": "fd0b9a0c-e449-4d27-b020-47db1933f003", 00:32:52.257 "is_configured": true, 00:32:52.257 "data_offset": 256, 00:32:52.257 "data_size": 7936 00:32:52.257 } 00:32:52.257 ] 00:32:52.257 } 00:32:52.257 } 00:32:52.257 }' 00:32:52.257 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:52.257 11:44:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:52.257 BaseBdev2' 00:32:52.257 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:52.257 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:52.257 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:52.826 "name": "BaseBdev1", 00:32:52.826 "aliases": [ 00:32:52.826 "5abe477c-98d7-4adb-b6ef-dbdccacf5dee" 00:32:52.826 ], 00:32:52.826 "product_name": "Malloc disk", 00:32:52.826 "block_size": 4128, 00:32:52.826 "num_blocks": 8192, 00:32:52.826 "uuid": "5abe477c-98d7-4adb-b6ef-dbdccacf5dee", 00:32:52.826 "md_size": 32, 00:32:52.826 "md_interleave": true, 00:32:52.826 "dif_type": 0, 00:32:52.826 "assigned_rate_limits": { 00:32:52.826 "rw_ios_per_sec": 0, 00:32:52.826 "rw_mbytes_per_sec": 0, 00:32:52.826 "r_mbytes_per_sec": 0, 00:32:52.826 "w_mbytes_per_sec": 0 00:32:52.826 }, 00:32:52.826 "claimed": true, 00:32:52.826 "claim_type": "exclusive_write", 00:32:52.826 "zoned": false, 00:32:52.826 "supported_io_types": { 00:32:52.826 "read": true, 00:32:52.826 "write": true, 00:32:52.826 "unmap": true, 00:32:52.826 "flush": true, 00:32:52.826 "reset": true, 00:32:52.826 "nvme_admin": false, 00:32:52.826 "nvme_io": false, 00:32:52.826 "nvme_io_md": false, 00:32:52.826 "write_zeroes": true, 00:32:52.826 "zcopy": true, 00:32:52.826 "get_zone_info": false, 00:32:52.826 "zone_management": false, 00:32:52.826 "zone_append": false, 00:32:52.826 "compare": false, 00:32:52.826 "compare_and_write": false, 00:32:52.826 "abort": true, 00:32:52.826 "seek_hole": false, 00:32:52.826 "seek_data": false, 00:32:52.826 "copy": true, 00:32:52.826 "nvme_iov_md": false 00:32:52.826 }, 00:32:52.826 "memory_domains": [ 00:32:52.826 { 00:32:52.826 "dma_device_id": "system", 00:32:52.826 "dma_device_type": 1 00:32:52.826 }, 00:32:52.826 { 00:32:52.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:52.826 "dma_device_type": 2 00:32:52.826 } 00:32:52.826 ], 00:32:52.826 "driver_specific": {} 00:32:52.826 }' 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:52.826 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:52.826 11:44:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.142 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.142 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:53.142 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:53.142 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:53.142 11:44:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:53.414 "name": "BaseBdev2", 00:32:53.414 "aliases": [ 00:32:53.414 "fd0b9a0c-e449-4d27-b020-47db1933f003" 00:32:53.414 ], 00:32:53.414 "product_name": "Malloc disk", 00:32:53.414 "block_size": 4128, 00:32:53.414 "num_blocks": 8192, 00:32:53.414 "uuid": "fd0b9a0c-e449-4d27-b020-47db1933f003", 00:32:53.414 "md_size": 32, 00:32:53.414 "md_interleave": true, 00:32:53.414 "dif_type": 0, 00:32:53.414 "assigned_rate_limits": { 00:32:53.414 "rw_ios_per_sec": 0, 00:32:53.414 "rw_mbytes_per_sec": 0, 00:32:53.414 "r_mbytes_per_sec": 0, 00:32:53.414 "w_mbytes_per_sec": 0 00:32:53.414 }, 00:32:53.414 "claimed": true, 00:32:53.414 "claim_type": "exclusive_write", 00:32:53.414 "zoned": false, 00:32:53.414 "supported_io_types": { 00:32:53.414 "read": true, 00:32:53.414 "write": true, 00:32:53.414 "unmap": true, 00:32:53.414 "flush": true, 00:32:53.414 "reset": true, 00:32:53.414 "nvme_admin": false, 00:32:53.414 "nvme_io": false, 00:32:53.414 "nvme_io_md": false, 00:32:53.414 "write_zeroes": true, 00:32:53.414 "zcopy": true, 00:32:53.414 "get_zone_info": false, 00:32:53.414 "zone_management": false, 00:32:53.414 "zone_append": false, 00:32:53.414 "compare": false, 00:32:53.414 "compare_and_write": false, 00:32:53.414 "abort": true, 00:32:53.414 "seek_hole": false, 00:32:53.414 "seek_data": false, 00:32:53.414 "copy": true, 00:32:53.414 "nvme_iov_md": false 00:32:53.414 }, 00:32:53.414 "memory_domains": [ 00:32:53.414 { 00:32:53.414 "dma_device_id": "system", 00:32:53.414 "dma_device_type": 1 00:32:53.414 }, 00:32:53.414 { 00:32:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:53.414 "dma_device_type": 2 00:32:53.414 } 00:32:53.414 ], 00:32:53.414 "driver_specific": {} 00:32:53.414 }' 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.414 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:53.673 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:53.932 [2024-07-25 11:44:09.769531] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.191 11:44:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:54.449 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.449 "name": "Existed_Raid", 00:32:54.449 "uuid": "6399700d-c23e-4872-8d9d-45a073bba5c8", 00:32:54.449 "strip_size_kb": 0, 00:32:54.449 "state": "online", 00:32:54.449 "raid_level": "raid1", 00:32:54.449 "superblock": true, 00:32:54.449 "num_base_bdevs": 2, 00:32:54.449 "num_base_bdevs_discovered": 1, 
00:32:54.449 "num_base_bdevs_operational": 1, 00:32:54.449 "base_bdevs_list": [ 00:32:54.449 { 00:32:54.449 "name": null, 00:32:54.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.449 "is_configured": false, 00:32:54.449 "data_offset": 256, 00:32:54.449 "data_size": 7936 00:32:54.449 }, 00:32:54.449 { 00:32:54.449 "name": "BaseBdev2", 00:32:54.449 "uuid": "fd0b9a0c-e449-4d27-b020-47db1933f003", 00:32:54.449 "is_configured": true, 00:32:54.449 "data_offset": 256, 00:32:54.449 "data_size": 7936 00:32:54.449 } 00:32:54.449 ] 00:32:54.449 }' 00:32:54.449 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.449 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:55.015 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:55.015 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:55.015 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.015 11:44:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:55.273 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:55.273 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:55.273 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:55.531 [2024-07-25 11:44:11.357606] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:55.531 [2024-07-25 11:44:11.357781] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:55.790 [2024-07-25 11:44:11.443822] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:55.790 [2024-07-25 11:44:11.443904] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:55.790 [2024-07-25 11:44:11.443921] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:55.790 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:55.790 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:55.790 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.790 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- 
# killprocess 103411 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 103411 ']' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 103411 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103411 00:32:56.049 killing process with pid 103411 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 103411' 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 103411 00:32:56.049 [2024-07-25 11:44:11.715977] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:56.049 11:44:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 103411 00:32:56.049 [2024-07-25 11:44:11.730684] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:57.427 11:44:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:32:57.427 00:32:57.427 real 0m13.490s 00:32:57.427 user 0m23.693s 00:32:57.427 sys 0m1.586s 00:32:57.427 11:44:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.427 11:44:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.427 ************************************ 00:32:57.427 END TEST raid_state_function_test_sb_md_interleaved 00:32:57.427 ************************************ 00:32:57.427 11:44:12 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:32:57.428 11:44:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:57.428 11:44:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.428 11:44:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:57.428 ************************************ 00:32:57.428 START TEST raid_superblock_test_md_interleaved 00:32:57.428 ************************************ 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:32:57.428 11:44:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=103780 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 103780 /var/tmp/spdk-raid.sock 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 103780 ']' 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:57.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.428 11:44:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.428 [2024-07-25 11:44:13.097719] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
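A condensed sketch (not part of the captured output) of the per-base-bdev setup this test drives over the RPC socket: the rpc.py path, socket, sizes and the -m 32 -i interleaved-metadata flags are copied from the xtrace that follows, while the loop wrapper is an assumption for brevity.

  # Create two 32 MB malloc bdevs with 4096-byte data blocks and 32 bytes of
  # interleaved metadata (4128-byte effective blocks), wrap each in a passthru
  # bdev, then assemble a raid1 volume with an on-disk superblock (-s).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for i in 1 2; do
    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b "malloc$i"
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
  done
  "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s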
00:32:57.428 [2024-07-25 11:44:13.098889] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103780 ] 00:32:57.428 [2024-07-25 11:44:13.285190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.686 [2024-07-25 11:44:13.524843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.943 [2024-07-25 11:44:13.729305] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:57.943 [2024-07-25 11:44:13.729367] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:58.511 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:32:58.769 malloc1 00:32:58.769 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:58.769 [2024-07-25 11:44:14.629335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:58.769 [2024-07-25 11:44:14.629662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:58.769 [2024-07-25 11:44:14.629841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:58.769 [2024-07-25 11:44:14.629992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:58.769 [2024-07-25 11:44:14.632668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:58.770 [2024-07-25 11:44:14.632854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:58.770 pt1 00:32:58.770 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:58.770 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:58.770 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:59.028 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:32:59.288 malloc2 00:32:59.288 11:44:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:59.288 [2024-07-25 11:44:15.147956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:59.288 [2024-07-25 11:44:15.148055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:59.288 [2024-07-25 11:44:15.148086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:59.288 [2024-07-25 11:44:15.148108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:59.288 [2024-07-25 11:44:15.150568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:59.288 [2024-07-25 11:44:15.150628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:59.288 pt2 00:32:59.548 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:59.548 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:59.548 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:59.808 [2024-07-25 11:44:15.440181] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:59.808 [2024-07-25 11:44:15.442732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:59.808 [2024-07-25 11:44:15.442994] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:59.808 [2024-07-25 11:44:15.443027] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:32:59.809 [2024-07-25 11:44:15.443150] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:59.809 [2024-07-25 11:44:15.443294] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:59.809 [2024-07-25 11:44:15.443310] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:59.809 [2024-07-25 11:44:15.443426] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 
0 2 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.809 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.069 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.069 "name": "raid_bdev1", 00:33:00.069 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:00.069 "strip_size_kb": 0, 00:33:00.069 "state": "online", 00:33:00.069 "raid_level": "raid1", 00:33:00.069 "superblock": true, 00:33:00.069 "num_base_bdevs": 2, 00:33:00.069 "num_base_bdevs_discovered": 2, 00:33:00.069 "num_base_bdevs_operational": 2, 00:33:00.069 "base_bdevs_list": [ 00:33:00.069 { 00:33:00.069 "name": "pt1", 00:33:00.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.069 "is_configured": true, 00:33:00.069 "data_offset": 256, 00:33:00.069 "data_size": 7936 00:33:00.069 }, 00:33:00.069 { 00:33:00.069 "name": "pt2", 00:33:00.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.069 "is_configured": true, 00:33:00.069 "data_offset": 256, 00:33:00.069 "data_size": 7936 00:33:00.069 } 00:33:00.069 ] 00:33:00.069 }' 00:33:00.069 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.069 11:44:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:00.638 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:00.896 [2024-07-25 11:44:16.572727] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:00.896 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:00.896 "name": "raid_bdev1", 00:33:00.896 "aliases": [ 00:33:00.896 "326d2a9f-f361-4fce-96fa-7337a88e816b" 00:33:00.896 ], 00:33:00.896 "product_name": "Raid Volume", 00:33:00.896 "block_size": 4128, 00:33:00.896 "num_blocks": 7936, 00:33:00.896 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:00.896 "md_size": 32, 00:33:00.896 "md_interleave": true, 00:33:00.896 "dif_type": 0, 00:33:00.896 "assigned_rate_limits": { 00:33:00.896 "rw_ios_per_sec": 0, 00:33:00.896 "rw_mbytes_per_sec": 0, 00:33:00.896 "r_mbytes_per_sec": 0, 00:33:00.896 "w_mbytes_per_sec": 0 00:33:00.896 }, 00:33:00.896 "claimed": false, 00:33:00.896 "zoned": false, 00:33:00.896 "supported_io_types": { 00:33:00.896 "read": true, 00:33:00.896 "write": true, 00:33:00.896 "unmap": false, 00:33:00.896 "flush": false, 00:33:00.896 "reset": true, 00:33:00.896 "nvme_admin": false, 00:33:00.896 "nvme_io": false, 00:33:00.896 "nvme_io_md": false, 00:33:00.896 "write_zeroes": true, 00:33:00.896 "zcopy": false, 00:33:00.896 "get_zone_info": false, 00:33:00.896 "zone_management": false, 00:33:00.896 "zone_append": false, 00:33:00.896 "compare": false, 00:33:00.896 "compare_and_write": false, 00:33:00.896 "abort": false, 00:33:00.896 "seek_hole": false, 00:33:00.896 "seek_data": false, 00:33:00.896 "copy": false, 00:33:00.896 "nvme_iov_md": false 00:33:00.896 }, 00:33:00.896 "memory_domains": [ 00:33:00.896 { 00:33:00.896 "dma_device_id": "system", 00:33:00.896 "dma_device_type": 1 00:33:00.896 }, 00:33:00.896 { 00:33:00.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.896 "dma_device_type": 2 00:33:00.896 }, 00:33:00.896 { 00:33:00.896 "dma_device_id": "system", 00:33:00.896 "dma_device_type": 1 00:33:00.896 }, 00:33:00.896 { 00:33:00.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.896 "dma_device_type": 2 00:33:00.896 } 00:33:00.896 ], 00:33:00.896 "driver_specific": { 00:33:00.896 "raid": { 00:33:00.896 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:00.896 "strip_size_kb": 0, 00:33:00.896 "state": "online", 00:33:00.896 "raid_level": "raid1", 00:33:00.896 "superblock": true, 00:33:00.896 "num_base_bdevs": 2, 00:33:00.896 "num_base_bdevs_discovered": 2, 00:33:00.896 "num_base_bdevs_operational": 2, 00:33:00.897 "base_bdevs_list": [ 00:33:00.897 { 00:33:00.897 "name": "pt1", 00:33:00.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.897 "is_configured": true, 00:33:00.897 "data_offset": 256, 00:33:00.897 "data_size": 7936 00:33:00.897 }, 00:33:00.897 { 00:33:00.897 "name": "pt2", 00:33:00.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.897 "is_configured": true, 00:33:00.897 "data_offset": 256, 00:33:00.897 "data_size": 7936 00:33:00.897 } 00:33:00.897 ] 00:33:00.897 } 00:33:00.897 } 00:33:00.897 }' 00:33:00.897 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:00.897 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:00.897 pt2' 00:33:00.897 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:33:00.897 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:00.897 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.155 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.155 "name": "pt1", 00:33:01.155 "aliases": [ 00:33:01.155 "00000000-0000-0000-0000-000000000001" 00:33:01.155 ], 00:33:01.155 "product_name": "passthru", 00:33:01.155 "block_size": 4128, 00:33:01.155 "num_blocks": 8192, 00:33:01.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:01.155 "md_size": 32, 00:33:01.155 "md_interleave": true, 00:33:01.155 "dif_type": 0, 00:33:01.155 "assigned_rate_limits": { 00:33:01.155 "rw_ios_per_sec": 0, 00:33:01.155 "rw_mbytes_per_sec": 0, 00:33:01.155 "r_mbytes_per_sec": 0, 00:33:01.155 "w_mbytes_per_sec": 0 00:33:01.155 }, 00:33:01.155 "claimed": true, 00:33:01.155 "claim_type": "exclusive_write", 00:33:01.155 "zoned": false, 00:33:01.155 "supported_io_types": { 00:33:01.155 "read": true, 00:33:01.155 "write": true, 00:33:01.155 "unmap": true, 00:33:01.155 "flush": true, 00:33:01.155 "reset": true, 00:33:01.155 "nvme_admin": false, 00:33:01.155 "nvme_io": false, 00:33:01.155 "nvme_io_md": false, 00:33:01.155 "write_zeroes": true, 00:33:01.155 "zcopy": true, 00:33:01.155 "get_zone_info": false, 00:33:01.155 "zone_management": false, 00:33:01.155 "zone_append": false, 00:33:01.155 "compare": false, 00:33:01.155 "compare_and_write": false, 00:33:01.155 "abort": true, 00:33:01.155 "seek_hole": false, 00:33:01.155 "seek_data": false, 00:33:01.155 "copy": true, 00:33:01.155 "nvme_iov_md": false 00:33:01.155 }, 00:33:01.155 "memory_domains": [ 00:33:01.155 { 00:33:01.155 "dma_device_id": "system", 00:33:01.155 "dma_device_type": 1 00:33:01.155 }, 00:33:01.155 { 00:33:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.155 "dma_device_type": 2 00:33:01.155 } 00:33:01.155 ], 00:33:01.155 "driver_specific": { 00:33:01.155 "passthru": { 00:33:01.155 "name": "pt1", 00:33:01.155 "base_bdev_name": "malloc1" 00:33:01.155 } 00:33:01.155 } 00:33:01.155 }' 00:33:01.155 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.155 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.155 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:01.155 11:44:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.155 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.414 11:44:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.414 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.982 "name": "pt2", 00:33:01.982 "aliases": [ 00:33:01.982 "00000000-0000-0000-0000-000000000002" 00:33:01.982 ], 00:33:01.982 "product_name": "passthru", 00:33:01.982 "block_size": 4128, 00:33:01.982 "num_blocks": 8192, 00:33:01.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.982 "md_size": 32, 00:33:01.982 "md_interleave": true, 00:33:01.982 "dif_type": 0, 00:33:01.982 "assigned_rate_limits": { 00:33:01.982 "rw_ios_per_sec": 0, 00:33:01.982 "rw_mbytes_per_sec": 0, 00:33:01.982 "r_mbytes_per_sec": 0, 00:33:01.982 "w_mbytes_per_sec": 0 00:33:01.982 }, 00:33:01.982 "claimed": true, 00:33:01.982 "claim_type": "exclusive_write", 00:33:01.982 "zoned": false, 00:33:01.982 "supported_io_types": { 00:33:01.982 "read": true, 00:33:01.982 "write": true, 00:33:01.982 "unmap": true, 00:33:01.982 "flush": true, 00:33:01.982 "reset": true, 00:33:01.982 "nvme_admin": false, 00:33:01.982 "nvme_io": false, 00:33:01.982 "nvme_io_md": false, 00:33:01.982 "write_zeroes": true, 00:33:01.982 "zcopy": true, 00:33:01.982 "get_zone_info": false, 00:33:01.982 "zone_management": false, 00:33:01.982 "zone_append": false, 00:33:01.982 "compare": false, 00:33:01.982 "compare_and_write": false, 00:33:01.982 "abort": true, 00:33:01.982 "seek_hole": false, 00:33:01.982 "seek_data": false, 00:33:01.982 "copy": true, 00:33:01.982 "nvme_iov_md": false 00:33:01.982 }, 00:33:01.982 "memory_domains": [ 00:33:01.982 { 00:33:01.982 "dma_device_id": "system", 00:33:01.982 "dma_device_type": 1 00:33:01.982 }, 00:33:01.982 { 00:33:01.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.982 "dma_device_type": 2 00:33:01.982 } 00:33:01.982 ], 00:33:01.982 "driver_specific": { 00:33:01.982 "passthru": { 00:33:01.982 "name": "pt2", 00:33:01.982 "base_bdev_name": "malloc2" 00:33:01.982 } 00:33:01.982 } 00:33:01.982 }' 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.982 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:02.240 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:02.240 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.240 11:44:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.240 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:02.240 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:33:02.240 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:02.498 [2024-07-25 11:44:18.273281] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:02.498 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=326d2a9f-f361-4fce-96fa-7337a88e816b 00:33:02.498 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 326d2a9f-f361-4fce-96fa-7337a88e816b ']' 00:33:02.498 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:02.770 [2024-07-25 11:44:18.596942] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:02.770 [2024-07-25 11:44:18.597002] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:02.770 [2024-07-25 11:44:18.597109] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.770 [2024-07-25 11:44:18.597197] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.770 [2024-07-25 11:44:18.597214] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:02.770 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.770 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:33:03.053 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:33:03.053 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:33:03.053 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:03.053 11:44:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:03.312 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:03.312 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:03.574 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:03.574 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:33:03.834 11:44:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:03.834 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:04.093 [2024-07-25 11:44:19.957258] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:04.093 [2024-07-25 11:44:19.959739] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:04.093 [2024-07-25 11:44:19.959844] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:04.093 [2024-07-25 11:44:19.959921] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:04.093 [2024-07-25 11:44:19.959952] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:04.093 [2024-07-25 11:44:19.959965] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:04.093 request: 00:33:04.093 { 00:33:04.093 "name": "raid_bdev1", 00:33:04.093 "raid_level": "raid1", 00:33:04.093 "base_bdevs": [ 00:33:04.093 "malloc1", 00:33:04.093 "malloc2" 00:33:04.094 ], 00:33:04.094 "superblock": false, 00:33:04.094 "method": "bdev_raid_create", 00:33:04.094 "req_id": 1 00:33:04.094 } 00:33:04.094 Got JSON-RPC error response 00:33:04.094 response: 00:33:04.094 { 00:33:04.094 "code": -17, 00:33:04.094 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:04.094 } 00:33:04.352 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:04.352 11:44:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:04.352 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:04.352 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:04.352 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:33:04.353 11:44:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.353 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:33:04.353 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:33:04.353 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:04.612 [2024-07-25 11:44:20.461291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:04.612 [2024-07-25 11:44:20.461410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.612 [2024-07-25 11:44:20.461444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:04.612 [2024-07-25 11:44:20.461459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.612 [2024-07-25 11:44:20.463934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.612 [2024-07-25 11:44:20.463976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:04.612 [2024-07-25 11:44:20.464072] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:04.612 [2024-07-25 11:44:20.464140] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:04.612 pt1 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
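The state verification running around this point amounts to filtering the bdev_raid_get_bdevs output with jq and comparing a handful of fields; a minimal sketch, assuming the same socket and the field names visible in the JSON dumps above (the real verify_raid_bdev_state helper takes the expected values as arguments).

  # Fetch raid_bdev1's descriptor and assert the fields the test checks here:
  # state, raid_level and the discovered/operational base bdev counts.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state <<<"$info") == configuring ]]
  [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
  [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 1 ]]
  [[ $(jq -r .num_base_bdevs_operational <<<"$info") == 2 ]]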
00:33:04.612 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.178 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:05.178 "name": "raid_bdev1", 00:33:05.178 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:05.178 "strip_size_kb": 0, 00:33:05.178 "state": "configuring", 00:33:05.178 "raid_level": "raid1", 00:33:05.178 "superblock": true, 00:33:05.178 "num_base_bdevs": 2, 00:33:05.178 "num_base_bdevs_discovered": 1, 00:33:05.178 "num_base_bdevs_operational": 2, 00:33:05.178 "base_bdevs_list": [ 00:33:05.178 { 00:33:05.178 "name": "pt1", 00:33:05.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:05.178 "is_configured": true, 00:33:05.178 "data_offset": 256, 00:33:05.178 "data_size": 7936 00:33:05.178 }, 00:33:05.178 { 00:33:05.178 "name": null, 00:33:05.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:05.178 "is_configured": false, 00:33:05.178 "data_offset": 256, 00:33:05.178 "data_size": 7936 00:33:05.178 } 00:33:05.178 ] 00:33:05.178 }' 00:33:05.178 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:05.178 11:44:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.746 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:33:05.746 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:33:05.746 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:05.746 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:06.005 [2024-07-25 11:44:21.629559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:06.005 [2024-07-25 11:44:21.629675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:06.005 [2024-07-25 11:44:21.629715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:06.005 [2024-07-25 11:44:21.629731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:06.005 [2024-07-25 11:44:21.629973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:06.005 [2024-07-25 11:44:21.629994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:06.005 [2024-07-25 11:44:21.630064] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:06.005 [2024-07-25 11:44:21.630092] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:06.005 [2024-07-25 11:44:21.630227] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:06.005 [2024-07-25 11:44:21.630242] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:06.005 [2024-07-25 11:44:21.630328] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:06.005 [2024-07-25 11:44:21.630408] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:06.005 [2024-07-25 11:44:21.630426] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007e80 00:33:06.005 [2024-07-25 11:44:21.630497] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.005 pt2 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.005 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.006 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.265 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.265 "name": "raid_bdev1", 00:33:06.265 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:06.265 "strip_size_kb": 0, 00:33:06.265 "state": "online", 00:33:06.265 "raid_level": "raid1", 00:33:06.265 "superblock": true, 00:33:06.265 "num_base_bdevs": 2, 00:33:06.265 "num_base_bdevs_discovered": 2, 00:33:06.265 "num_base_bdevs_operational": 2, 00:33:06.265 "base_bdevs_list": [ 00:33:06.265 { 00:33:06.265 "name": "pt1", 00:33:06.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.265 "is_configured": true, 00:33:06.265 "data_offset": 256, 00:33:06.265 "data_size": 7936 00:33:06.265 }, 00:33:06.265 { 00:33:06.265 "name": "pt2", 00:33:06.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.265 "is_configured": true, 00:33:06.265 "data_offset": 256, 00:33:06.265 "data_size": 7936 00:33:06.265 } 00:33:06.265 ] 00:33:06.265 }' 00:33:06.265 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.265 11:44:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:06.833 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:07.093 [2024-07-25 11:44:22.866249] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:07.093 "name": "raid_bdev1", 00:33:07.093 "aliases": [ 00:33:07.093 "326d2a9f-f361-4fce-96fa-7337a88e816b" 00:33:07.093 ], 00:33:07.093 "product_name": "Raid Volume", 00:33:07.093 "block_size": 4128, 00:33:07.093 "num_blocks": 7936, 00:33:07.093 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:07.093 "md_size": 32, 00:33:07.093 "md_interleave": true, 00:33:07.093 "dif_type": 0, 00:33:07.093 "assigned_rate_limits": { 00:33:07.093 "rw_ios_per_sec": 0, 00:33:07.093 "rw_mbytes_per_sec": 0, 00:33:07.093 "r_mbytes_per_sec": 0, 00:33:07.093 "w_mbytes_per_sec": 0 00:33:07.093 }, 00:33:07.093 "claimed": false, 00:33:07.093 "zoned": false, 00:33:07.093 "supported_io_types": { 00:33:07.093 "read": true, 00:33:07.093 "write": true, 00:33:07.093 "unmap": false, 00:33:07.093 "flush": false, 00:33:07.093 "reset": true, 00:33:07.093 "nvme_admin": false, 00:33:07.093 "nvme_io": false, 00:33:07.093 "nvme_io_md": false, 00:33:07.093 "write_zeroes": true, 00:33:07.093 "zcopy": false, 00:33:07.093 "get_zone_info": false, 00:33:07.093 "zone_management": false, 00:33:07.093 "zone_append": false, 00:33:07.093 "compare": false, 00:33:07.093 "compare_and_write": false, 00:33:07.093 "abort": false, 00:33:07.093 "seek_hole": false, 00:33:07.093 "seek_data": false, 00:33:07.093 "copy": false, 00:33:07.093 "nvme_iov_md": false 00:33:07.093 }, 00:33:07.093 "memory_domains": [ 00:33:07.093 { 00:33:07.093 "dma_device_id": "system", 00:33:07.093 "dma_device_type": 1 00:33:07.093 }, 00:33:07.093 { 00:33:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.093 "dma_device_type": 2 00:33:07.093 }, 00:33:07.093 { 00:33:07.093 "dma_device_id": "system", 00:33:07.093 "dma_device_type": 1 00:33:07.093 }, 00:33:07.093 { 00:33:07.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.093 "dma_device_type": 2 00:33:07.093 } 00:33:07.093 ], 00:33:07.093 "driver_specific": { 00:33:07.093 "raid": { 00:33:07.093 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:07.093 "strip_size_kb": 0, 00:33:07.093 "state": "online", 00:33:07.093 "raid_level": "raid1", 00:33:07.093 "superblock": true, 00:33:07.093 "num_base_bdevs": 2, 00:33:07.093 "num_base_bdevs_discovered": 2, 00:33:07.093 "num_base_bdevs_operational": 2, 00:33:07.093 "base_bdevs_list": [ 00:33:07.093 { 00:33:07.093 "name": "pt1", 00:33:07.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:07.093 "is_configured": true, 00:33:07.093 "data_offset": 256, 00:33:07.093 "data_size": 7936 00:33:07.093 }, 00:33:07.093 { 00:33:07.093 "name": "pt2", 00:33:07.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:07.093 "is_configured": true, 00:33:07.093 "data_offset": 256, 
00:33:07.093 "data_size": 7936 00:33:07.093 } 00:33:07.093 ] 00:33:07.093 } 00:33:07.093 } 00:33:07.093 }' 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:07.093 pt2' 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:07.093 11:44:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:07.352 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:07.352 "name": "pt1", 00:33:07.352 "aliases": [ 00:33:07.352 "00000000-0000-0000-0000-000000000001" 00:33:07.352 ], 00:33:07.352 "product_name": "passthru", 00:33:07.352 "block_size": 4128, 00:33:07.352 "num_blocks": 8192, 00:33:07.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:07.352 "md_size": 32, 00:33:07.352 "md_interleave": true, 00:33:07.353 "dif_type": 0, 00:33:07.353 "assigned_rate_limits": { 00:33:07.353 "rw_ios_per_sec": 0, 00:33:07.353 "rw_mbytes_per_sec": 0, 00:33:07.353 "r_mbytes_per_sec": 0, 00:33:07.353 "w_mbytes_per_sec": 0 00:33:07.353 }, 00:33:07.353 "claimed": true, 00:33:07.353 "claim_type": "exclusive_write", 00:33:07.353 "zoned": false, 00:33:07.353 "supported_io_types": { 00:33:07.353 "read": true, 00:33:07.353 "write": true, 00:33:07.353 "unmap": true, 00:33:07.353 "flush": true, 00:33:07.353 "reset": true, 00:33:07.353 "nvme_admin": false, 00:33:07.353 "nvme_io": false, 00:33:07.353 "nvme_io_md": false, 00:33:07.353 "write_zeroes": true, 00:33:07.353 "zcopy": true, 00:33:07.353 "get_zone_info": false, 00:33:07.353 "zone_management": false, 00:33:07.353 "zone_append": false, 00:33:07.353 "compare": false, 00:33:07.353 "compare_and_write": false, 00:33:07.353 "abort": true, 00:33:07.353 "seek_hole": false, 00:33:07.353 "seek_data": false, 00:33:07.353 "copy": true, 00:33:07.353 "nvme_iov_md": false 00:33:07.353 }, 00:33:07.353 "memory_domains": [ 00:33:07.353 { 00:33:07.353 "dma_device_id": "system", 00:33:07.353 "dma_device_type": 1 00:33:07.353 }, 00:33:07.353 { 00:33:07.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.353 "dma_device_type": 2 00:33:07.353 } 00:33:07.353 ], 00:33:07.353 "driver_specific": { 00:33:07.353 "passthru": { 00:33:07.353 "name": "pt1", 00:33:07.353 "base_bdev_name": "malloc1" 00:33:07.353 } 00:33:07.353 } 00:33:07.353 }' 00:33:07.353 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.611 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:07.611 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:07.611 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.612 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:07.612 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:07.612 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.612 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:07.869 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:08.127 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:08.127 "name": "pt2", 00:33:08.127 "aliases": [ 00:33:08.127 "00000000-0000-0000-0000-000000000002" 00:33:08.127 ], 00:33:08.127 "product_name": "passthru", 00:33:08.127 "block_size": 4128, 00:33:08.127 "num_blocks": 8192, 00:33:08.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:08.127 "md_size": 32, 00:33:08.127 "md_interleave": true, 00:33:08.127 "dif_type": 0, 00:33:08.127 "assigned_rate_limits": { 00:33:08.127 "rw_ios_per_sec": 0, 00:33:08.127 "rw_mbytes_per_sec": 0, 00:33:08.127 "r_mbytes_per_sec": 0, 00:33:08.127 "w_mbytes_per_sec": 0 00:33:08.127 }, 00:33:08.127 "claimed": true, 00:33:08.127 "claim_type": "exclusive_write", 00:33:08.127 "zoned": false, 00:33:08.127 "supported_io_types": { 00:33:08.127 "read": true, 00:33:08.127 "write": true, 00:33:08.127 "unmap": true, 00:33:08.127 "flush": true, 00:33:08.127 "reset": true, 00:33:08.127 "nvme_admin": false, 00:33:08.127 "nvme_io": false, 00:33:08.127 "nvme_io_md": false, 00:33:08.127 "write_zeroes": true, 00:33:08.127 "zcopy": true, 00:33:08.127 "get_zone_info": false, 00:33:08.127 "zone_management": false, 00:33:08.127 "zone_append": false, 00:33:08.127 "compare": false, 00:33:08.127 "compare_and_write": false, 00:33:08.127 "abort": true, 00:33:08.127 "seek_hole": false, 00:33:08.127 "seek_data": false, 00:33:08.127 "copy": true, 00:33:08.127 "nvme_iov_md": false 00:33:08.127 }, 00:33:08.127 "memory_domains": [ 00:33:08.127 { 00:33:08.127 "dma_device_id": "system", 00:33:08.127 "dma_device_type": 1 00:33:08.127 }, 00:33:08.127 { 00:33:08.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:08.127 "dma_device_type": 2 00:33:08.127 } 00:33:08.127 ], 00:33:08.127 "driver_specific": { 00:33:08.127 "passthru": { 00:33:08.127 "name": "pt2", 00:33:08.127 "base_bdev_name": "malloc2" 00:33:08.127 } 00:33:08.127 } 00:33:08.127 }' 00:33:08.127 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:08.127 11:44:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:08.386 
11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:08.386 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.644 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:08.644 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:08.644 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:08.644 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:33:08.903 [2024-07-25 11:44:24.642699] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:08.903 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 326d2a9f-f361-4fce-96fa-7337a88e816b '!=' 326d2a9f-f361-4fce-96fa-7337a88e816b ']' 00:33:08.903 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:33:08.903 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:08.903 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:08.903 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:09.162 [2024-07-25 11:44:24.922502] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.162 11:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.420 11:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:09.420 "name": "raid_bdev1", 00:33:09.420 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:09.420 "strip_size_kb": 0, 00:33:09.420 "state": "online", 00:33:09.420 "raid_level": "raid1", 00:33:09.421 "superblock": true, 00:33:09.421 "num_base_bdevs": 2, 00:33:09.421 "num_base_bdevs_discovered": 1, 00:33:09.421 "num_base_bdevs_operational": 1, 00:33:09.421 "base_bdevs_list": [ 00:33:09.421 { 00:33:09.421 "name": null, 00:33:09.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.421 "is_configured": false, 00:33:09.421 "data_offset": 256, 00:33:09.421 "data_size": 7936 00:33:09.421 }, 00:33:09.421 { 00:33:09.421 "name": "pt2", 00:33:09.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:09.421 "is_configured": true, 00:33:09.421 "data_offset": 256, 00:33:09.421 "data_size": 7936 00:33:09.421 } 00:33:09.421 ] 00:33:09.421 }' 00:33:09.421 11:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:09.421 11:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.359 11:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:10.359 [2024-07-25 11:44:26.194754] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:10.359 [2024-07-25 11:44:26.194797] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:10.359 [2024-07-25 11:44:26.194890] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:10.359 [2024-07-25 11:44:26.194964] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:10.359 [2024-07-25 11:44:26.194980] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:10.359 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.359 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:33:10.925 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:33:10.925 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:33:10.925 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:10.925 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:10.925 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- 
# (( i < num_base_bdevs - 1 )) 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:33:11.183 11:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:11.441 [2024-07-25 11:44:27.175002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:11.441 [2024-07-25 11:44:27.175114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.441 [2024-07-25 11:44:27.175160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:11.441 [2024-07-25 11:44:27.175178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.441 [2024-07-25 11:44:27.177730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.441 [2024-07-25 11:44:27.177777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:11.441 [2024-07-25 11:44:27.177867] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:11.441 [2024-07-25 11:44:27.177933] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:11.441 [2024-07-25 11:44:27.178052] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:11.441 [2024-07-25 11:44:27.178067] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:11.441 [2024-07-25 11:44:27.178161] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:11.441 [2024-07-25 11:44:27.178306] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:11.441 [2024-07-25 11:44:27.178336] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:11.441 [2024-07-25 11:44:27.178423] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.441 pt2 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.441 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.698 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:11.698 "name": "raid_bdev1", 00:33:11.698 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:11.698 "strip_size_kb": 0, 00:33:11.698 "state": "online", 00:33:11.698 "raid_level": "raid1", 00:33:11.698 "superblock": true, 00:33:11.698 "num_base_bdevs": 2, 00:33:11.698 "num_base_bdevs_discovered": 1, 00:33:11.698 "num_base_bdevs_operational": 1, 00:33:11.698 "base_bdevs_list": [ 00:33:11.698 { 00:33:11.698 "name": null, 00:33:11.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.698 "is_configured": false, 00:33:11.698 "data_offset": 256, 00:33:11.698 "data_size": 7936 00:33:11.698 }, 00:33:11.698 { 00:33:11.698 "name": "pt2", 00:33:11.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:11.698 "is_configured": true, 00:33:11.698 "data_offset": 256, 00:33:11.698 "data_size": 7936 00:33:11.698 } 00:33:11.698 ] 00:33:11.698 }' 00:33:11.698 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:11.698 11:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.631 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:12.631 [2024-07-25 11:44:28.487319] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:12.631 [2024-07-25 11:44:28.487362] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:12.631 [2024-07-25 11:44:28.487465] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.631 [2024-07-25 11:44:28.487534] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.631 [2024-07-25 11:44:28.487554] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:12.631 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.631 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:33:13.246 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:33:13.246 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:33:13.246 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:33:13.246 11:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:13.246 [2024-07-25 11:44:29.079504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:13.246 [2024-07-25 11:44:29.079614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:13.246 [2024-07-25 11:44:29.079677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:13.246 [2024-07-25 
11:44:29.079696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:13.246 [2024-07-25 11:44:29.082270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:13.246 [2024-07-25 11:44:29.082320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:13.246 [2024-07-25 11:44:29.082397] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:13.246 [2024-07-25 11:44:29.082474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:13.246 [2024-07-25 11:44:29.082628] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:13.246 [2024-07-25 11:44:29.082664] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:13.246 [2024-07-25 11:44:29.082700] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:13.246 [2024-07-25 11:44:29.082785] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:13.246 [2024-07-25 11:44:29.082887] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:13.246 [2024-07-25 11:44:29.082908] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:13.246 [2024-07-25 11:44:29.082993] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:13.246 [2024-07-25 11:44:29.083099] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:13.246 [2024-07-25 11:44:29.083114] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:13.246 [2024-07-25 11:44:29.083200] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:13.246 pt1 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.246 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.246 11:44:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.505 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.505 "name": "raid_bdev1", 00:33:13.505 "uuid": "326d2a9f-f361-4fce-96fa-7337a88e816b", 00:33:13.505 "strip_size_kb": 0, 00:33:13.505 "state": "online", 00:33:13.505 "raid_level": "raid1", 00:33:13.505 "superblock": true, 00:33:13.505 "num_base_bdevs": 2, 00:33:13.505 "num_base_bdevs_discovered": 1, 00:33:13.505 "num_base_bdevs_operational": 1, 00:33:13.505 "base_bdevs_list": [ 00:33:13.505 { 00:33:13.505 "name": null, 00:33:13.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.505 "is_configured": false, 00:33:13.505 "data_offset": 256, 00:33:13.505 "data_size": 7936 00:33:13.505 }, 00:33:13.505 { 00:33:13.505 "name": "pt2", 00:33:13.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.505 "is_configured": true, 00:33:13.505 "data_offset": 256, 00:33:13.505 "data_size": 7936 00:33:13.505 } 00:33:13.505 ] 00:33:13.505 }' 00:33:13.505 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.505 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.444 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:14.444 11:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:14.444 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:33:14.444 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:33:14.444 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:14.703 [2024-07-25 11:44:30.434410] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 326d2a9f-f361-4fce-96fa-7337a88e816b '!=' 326d2a9f-f361-4fce-96fa-7337a88e816b ']' 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 103780 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 103780 ']' 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 103780 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 103780 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:14.703 killing process with pid 103780 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 103780' 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 103780 00:33:14.703 [2024-07-25 11:44:30.483267] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:14.703 11:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 103780 00:33:14.703 [2024-07-25 11:44:30.483378] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:14.703 [2024-07-25 11:44:30.483456] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:14.703 [2024-07-25 11:44:30.483473] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:14.961 [2024-07-25 11:44:30.677984] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:16.334 11:44:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:33:16.334 00:33:16.334 real 0m18.872s 00:33:16.334 user 0m34.008s 00:33:16.334 sys 0m2.470s 00:33:16.334 11:44:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.334 ************************************ 00:33:16.334 END TEST raid_superblock_test_md_interleaved 00:33:16.334 ************************************ 00:33:16.334 11:44:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.334 11:44:31 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:16.334 11:44:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:16.334 11:44:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.334 11:44:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:16.334 ************************************ 00:33:16.334 START TEST raid_rebuild_test_sb_md_interleaved 00:33:16.334 ************************************ 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:33:16.334 11:44:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:33:16.334 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=104293 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 104293 /var/tmp/spdk-raid.sock 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 104293 ']' 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:16.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.335 11:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.335 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:16.335 Zero copy mechanism will not be used. 00:33:16.335 [2024-07-25 11:44:32.043249] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:16.335 [2024-07-25 11:44:32.043436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104293 ] 00:33:16.593 [2024-07-25 11:44:32.246255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.852 [2024-07-25 11:44:32.494048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.852 [2024-07-25 11:44:32.697852] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:16.852 [2024-07-25 11:44:32.697927] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:17.418 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:17.418 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:17.418 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:17.418 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:17.418 BaseBdev1_malloc 00:33:17.675 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:17.675 [2024-07-25 11:44:33.541128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:17.675 [2024-07-25 11:44:33.541251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:17.675 [2024-07-25 11:44:33.541289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:17.675 [2024-07-25 11:44:33.541305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:17.675 [2024-07-25 11:44:33.544329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:17.675 [2024-07-25 11:44:33.544388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:17.675 BaseBdev1 00:33:17.933 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:17.933 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:18.191 BaseBdev2_malloc 00:33:18.191 11:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:18.449 [2024-07-25 11:44:34.220214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:18.449 [2024-07-25 11:44:34.220541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.449 [2024-07-25 11:44:34.220638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:18.449 [2024-07-25 11:44:34.220901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.449 [2024-07-25 11:44:34.223311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:33:18.449 [2024-07-25 11:44:34.223525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:18.449 BaseBdev2 00:33:18.449 11:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:18.707 spare_malloc 00:33:18.707 11:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:18.966 spare_delay 00:33:18.966 11:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:19.225 [2024-07-25 11:44:34.996603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:19.225 [2024-07-25 11:44:34.996721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.225 [2024-07-25 11:44:34.996801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:19.225 [2024-07-25 11:44:34.996819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.225 [2024-07-25 11:44:34.999560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.225 [2024-07-25 11:44:34.999656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:19.225 spare 00:33:19.225 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:19.483 [2024-07-25 11:44:35.272741] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:19.483 [2024-07-25 11:44:35.275152] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:19.483 [2024-07-25 11:44:35.275446] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:19.483 [2024-07-25 11:44:35.275466] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:19.483 [2024-07-25 11:44:35.275596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:19.483 [2024-07-25 11:44:35.275727] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:19.483 [2024-07-25 11:44:35.275750] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:19.483 [2024-07-25 11:44:35.275864] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.483 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.741 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:19.741 "name": "raid_bdev1", 00:33:19.741 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:19.741 "strip_size_kb": 0, 00:33:19.741 "state": "online", 00:33:19.741 "raid_level": "raid1", 00:33:19.741 "superblock": true, 00:33:19.741 "num_base_bdevs": 2, 00:33:19.741 "num_base_bdevs_discovered": 2, 00:33:19.741 "num_base_bdevs_operational": 2, 00:33:19.741 "base_bdevs_list": [ 00:33:19.741 { 00:33:19.741 "name": "BaseBdev1", 00:33:19.741 "uuid": "0db2fa17-0264-5e3f-955d-db731adb1aab", 00:33:19.741 "is_configured": true, 00:33:19.741 "data_offset": 256, 00:33:19.741 "data_size": 7936 00:33:19.741 }, 00:33:19.741 { 00:33:19.741 "name": "BaseBdev2", 00:33:19.741 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:19.741 "is_configured": true, 00:33:19.741 "data_offset": 256, 00:33:19.741 "data_size": 7936 00:33:19.741 } 00:33:19.741 ] 00:33:19.741 }' 00:33:19.741 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:19.741 11:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.676 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:20.676 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:33:20.676 [2024-07-25 11:44:36.437395] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:20.676 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:33:20.676 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.676 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:20.935 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:33:20.935 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:33:20.935 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:33:20.935 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:21.193 [2024-07-25 11:44:36.945178] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.193 11:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.451 11:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.451 "name": "raid_bdev1", 00:33:21.451 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:21.451 "strip_size_kb": 0, 00:33:21.451 "state": "online", 00:33:21.451 "raid_level": "raid1", 00:33:21.451 "superblock": true, 00:33:21.451 "num_base_bdevs": 2, 00:33:21.451 "num_base_bdevs_discovered": 1, 00:33:21.451 "num_base_bdevs_operational": 1, 00:33:21.451 "base_bdevs_list": [ 00:33:21.451 { 00:33:21.451 "name": null, 00:33:21.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.451 "is_configured": false, 00:33:21.451 "data_offset": 256, 00:33:21.451 "data_size": 7936 00:33:21.451 }, 00:33:21.451 { 00:33:21.451 "name": "BaseBdev2", 00:33:21.452 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:21.452 "is_configured": true, 00:33:21.452 "data_offset": 256, 00:33:21.452 "data_size": 7936 00:33:21.452 } 00:33:21.452 ] 00:33:21.452 }' 00:33:21.452 11:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.452 11:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.018 11:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:22.277 [2024-07-25 11:44:38.097669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:22.277 [2024-07-25 11:44:38.116954] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:22.277 [2024-07-25 11:44:38.119833] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:33:22.277 11:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.285 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.286 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.544 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.544 "name": "raid_bdev1", 00:33:23.544 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:23.544 "strip_size_kb": 0, 00:33:23.544 "state": "online", 00:33:23.544 "raid_level": "raid1", 00:33:23.544 "superblock": true, 00:33:23.544 "num_base_bdevs": 2, 00:33:23.544 "num_base_bdevs_discovered": 2, 00:33:23.544 "num_base_bdevs_operational": 2, 00:33:23.544 "process": { 00:33:23.544 "type": "rebuild", 00:33:23.544 "target": "spare", 00:33:23.544 "progress": { 00:33:23.544 "blocks": 3072, 00:33:23.544 "percent": 38 00:33:23.544 } 00:33:23.544 }, 00:33:23.544 "base_bdevs_list": [ 00:33:23.544 { 00:33:23.544 "name": "spare", 00:33:23.544 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:23.544 "is_configured": true, 00:33:23.544 "data_offset": 256, 00:33:23.544 "data_size": 7936 00:33:23.544 }, 00:33:23.544 { 00:33:23.544 "name": "BaseBdev2", 00:33:23.544 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:23.544 "is_configured": true, 00:33:23.544 "data_offset": 256, 00:33:23.544 "data_size": 7936 00:33:23.544 } 00:33:23.544 ] 00:33:23.544 }' 00:33:23.544 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.804 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.804 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.804 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.804 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:24.063 [2024-07-25 11:44:39.781420] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:24.063 [2024-07-25 11:44:39.832796] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:24.063 [2024-07-25 11:44:39.833099] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.063 [2024-07-25 11:44:39.833367] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:24.063 [2024-07-25 11:44:39.833423] 
bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.063 11:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.322 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.322 "name": "raid_bdev1", 00:33:24.322 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:24.322 "strip_size_kb": 0, 00:33:24.322 "state": "online", 00:33:24.322 "raid_level": "raid1", 00:33:24.322 "superblock": true, 00:33:24.322 "num_base_bdevs": 2, 00:33:24.322 "num_base_bdevs_discovered": 1, 00:33:24.322 "num_base_bdevs_operational": 1, 00:33:24.322 "base_bdevs_list": [ 00:33:24.322 { 00:33:24.322 "name": null, 00:33:24.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.322 "is_configured": false, 00:33:24.322 "data_offset": 256, 00:33:24.322 "data_size": 7936 00:33:24.322 }, 00:33:24.322 { 00:33:24.322 "name": "BaseBdev2", 00:33:24.322 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:24.322 "is_configured": true, 00:33:24.322 "data_offset": 256, 00:33:24.322 "data_size": 7936 00:33:24.322 } 00:33:24.322 ] 00:33:24.322 }' 00:33:24.322 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.322 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:25.259 
11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.259 11:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.259 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:25.259 "name": "raid_bdev1", 00:33:25.259 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:25.259 "strip_size_kb": 0, 00:33:25.259 "state": "online", 00:33:25.259 "raid_level": "raid1", 00:33:25.259 "superblock": true, 00:33:25.259 "num_base_bdevs": 2, 00:33:25.259 "num_base_bdevs_discovered": 1, 00:33:25.259 "num_base_bdevs_operational": 1, 00:33:25.259 "base_bdevs_list": [ 00:33:25.259 { 00:33:25.259 "name": null, 00:33:25.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.259 "is_configured": false, 00:33:25.259 "data_offset": 256, 00:33:25.259 "data_size": 7936 00:33:25.259 }, 00:33:25.259 { 00:33:25.259 "name": "BaseBdev2", 00:33:25.259 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:25.259 "is_configured": true, 00:33:25.259 "data_offset": 256, 00:33:25.259 "data_size": 7936 00:33:25.259 } 00:33:25.259 ] 00:33:25.259 }' 00:33:25.259 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:25.518 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:25.518 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:25.518 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:25.518 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:25.777 [2024-07-25 11:44:41.447593] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:25.777 [2024-07-25 11:44:41.463067] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:25.777 [2024-07-25 11:44:41.465518] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:25.777 11:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.714 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.995 "name": "raid_bdev1", 00:33:26.995 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:26.995 "strip_size_kb": 0, 00:33:26.995 "state": "online", 00:33:26.995 "raid_level": "raid1", 00:33:26.995 "superblock": true, 00:33:26.995 "num_base_bdevs": 2, 00:33:26.995 "num_base_bdevs_discovered": 2, 00:33:26.995 "num_base_bdevs_operational": 2, 00:33:26.995 "process": { 00:33:26.995 "type": "rebuild", 00:33:26.995 "target": "spare", 00:33:26.995 "progress": { 00:33:26.995 "blocks": 3072, 00:33:26.995 "percent": 38 00:33:26.995 } 00:33:26.995 }, 00:33:26.995 "base_bdevs_list": [ 00:33:26.995 { 00:33:26.995 "name": "spare", 00:33:26.995 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:26.995 "is_configured": true, 00:33:26.995 "data_offset": 256, 00:33:26.995 "data_size": 7936 00:33:26.995 }, 00:33:26.995 { 00:33:26.995 "name": "BaseBdev2", 00:33:26.995 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:26.995 "is_configured": true, 00:33:26.995 "data_offset": 256, 00:33:26.995 "data_size": 7936 00:33:26.995 } 00:33:26.995 ] 00:33:26.995 }' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:33:26.995 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1646 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.995 11:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:33:27.259 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:27.259 "name": "raid_bdev1", 00:33:27.259 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:27.259 "strip_size_kb": 0, 00:33:27.259 "state": "online", 00:33:27.259 "raid_level": "raid1", 00:33:27.259 "superblock": true, 00:33:27.259 "num_base_bdevs": 2, 00:33:27.259 "num_base_bdevs_discovered": 2, 00:33:27.259 "num_base_bdevs_operational": 2, 00:33:27.259 "process": { 00:33:27.259 "type": "rebuild", 00:33:27.259 "target": "spare", 00:33:27.259 "progress": { 00:33:27.259 "blocks": 4096, 00:33:27.259 "percent": 51 00:33:27.259 } 00:33:27.259 }, 00:33:27.259 "base_bdevs_list": [ 00:33:27.259 { 00:33:27.259 "name": "spare", 00:33:27.259 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:27.259 "is_configured": true, 00:33:27.259 "data_offset": 256, 00:33:27.259 "data_size": 7936 00:33:27.259 }, 00:33:27.259 { 00:33:27.259 "name": "BaseBdev2", 00:33:27.259 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:27.259 "is_configured": true, 00:33:27.259 "data_offset": 256, 00:33:27.259 "data_size": 7936 00:33:27.259 } 00:33:27.259 ] 00:33:27.259 }' 00:33:27.260 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:27.519 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:27.519 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:27.519 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:27.519 11:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:28.455 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:28.455 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.456 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:28.714 "name": "raid_bdev1", 00:33:28.714 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:28.714 "strip_size_kb": 0, 00:33:28.714 "state": "online", 00:33:28.714 "raid_level": "raid1", 00:33:28.714 "superblock": true, 00:33:28.714 "num_base_bdevs": 2, 00:33:28.714 "num_base_bdevs_discovered": 2, 00:33:28.714 "num_base_bdevs_operational": 2, 00:33:28.714 "process": { 00:33:28.714 "type": "rebuild", 00:33:28.714 "target": "spare", 00:33:28.714 "progress": { 00:33:28.714 "blocks": 7424, 00:33:28.714 "percent": 93 
00:33:28.714 } 00:33:28.714 }, 00:33:28.714 "base_bdevs_list": [ 00:33:28.714 { 00:33:28.714 "name": "spare", 00:33:28.714 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:28.714 "is_configured": true, 00:33:28.714 "data_offset": 256, 00:33:28.714 "data_size": 7936 00:33:28.714 }, 00:33:28.714 { 00:33:28.714 "name": "BaseBdev2", 00:33:28.714 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:28.714 "is_configured": true, 00:33:28.714 "data_offset": 256, 00:33:28.714 "data_size": 7936 00:33:28.714 } 00:33:28.714 ] 00:33:28.714 }' 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:28.714 11:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:28.714 [2024-07-25 11:44:44.588973] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:28.714 [2024-07-25 11:44:44.589099] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:28.714 [2024-07-25 11:44:44.589263] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.091 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:30.091 "name": "raid_bdev1", 00:33:30.091 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:30.091 "strip_size_kb": 0, 00:33:30.091 "state": "online", 00:33:30.091 "raid_level": "raid1", 00:33:30.091 "superblock": true, 00:33:30.091 "num_base_bdevs": 2, 00:33:30.091 "num_base_bdevs_discovered": 2, 00:33:30.091 "num_base_bdevs_operational": 2, 00:33:30.091 "base_bdevs_list": [ 00:33:30.091 { 00:33:30.091 "name": "spare", 00:33:30.091 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:30.091 "is_configured": true, 00:33:30.091 "data_offset": 256, 00:33:30.091 "data_size": 7936 00:33:30.091 }, 00:33:30.091 { 00:33:30.092 "name": "BaseBdev2", 00:33:30.092 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:30.092 "is_configured": true, 00:33:30.092 "data_offset": 256, 
00:33:30.092 "data_size": 7936 00:33:30.092 } 00:33:30.092 ] 00:33:30.092 }' 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.092 11:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.351 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:30.351 "name": "raid_bdev1", 00:33:30.351 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:30.351 "strip_size_kb": 0, 00:33:30.351 "state": "online", 00:33:30.351 "raid_level": "raid1", 00:33:30.351 "superblock": true, 00:33:30.351 "num_base_bdevs": 2, 00:33:30.351 "num_base_bdevs_discovered": 2, 00:33:30.351 "num_base_bdevs_operational": 2, 00:33:30.351 "base_bdevs_list": [ 00:33:30.351 { 00:33:30.351 "name": "spare", 00:33:30.351 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:30.351 "is_configured": true, 00:33:30.351 "data_offset": 256, 00:33:30.351 "data_size": 7936 00:33:30.351 }, 00:33:30.351 { 00:33:30.351 "name": "BaseBdev2", 00:33:30.351 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:30.351 "is_configured": true, 00:33:30.351 "data_offset": 256, 00:33:30.351 "data_size": 7936 00:33:30.351 } 00:33:30.351 ] 00:33:30.351 }' 00:33:30.351 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.610 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.869 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.869 "name": "raid_bdev1", 00:33:30.869 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:30.869 "strip_size_kb": 0, 00:33:30.869 "state": "online", 00:33:30.869 "raid_level": "raid1", 00:33:30.869 "superblock": true, 00:33:30.869 "num_base_bdevs": 2, 00:33:30.869 "num_base_bdevs_discovered": 2, 00:33:30.869 "num_base_bdevs_operational": 2, 00:33:30.869 "base_bdevs_list": [ 00:33:30.869 { 00:33:30.869 "name": "spare", 00:33:30.869 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:30.869 "is_configured": true, 00:33:30.869 "data_offset": 256, 00:33:30.869 "data_size": 7936 00:33:30.869 }, 00:33:30.869 { 00:33:30.869 "name": "BaseBdev2", 00:33:30.869 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:30.869 "is_configured": true, 00:33:30.869 "data_offset": 256, 00:33:30.869 "data_size": 7936 00:33:30.869 } 00:33:30.869 ] 00:33:30.869 }' 00:33:30.869 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.869 11:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:31.807 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:31.807 [2024-07-25 11:44:47.593128] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:31.807 [2024-07-25 11:44:47.593161] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:31.807 [2024-07-25 11:44:47.593263] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:31.807 [2024-07-25 11:44:47.593367] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:31.807 [2024-07-25 11:44:47.593387] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:31.807 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.807 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq 
length 00:33:32.068 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:33:32.068 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:33:32.068 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:33:32.068 11:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:32.327 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:32.586 [2024-07-25 11:44:48.365327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:32.586 [2024-07-25 11:44:48.365418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.586 [2024-07-25 11:44:48.365449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:32.586 [2024-07-25 11:44:48.365467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.586 [2024-07-25 11:44:48.367947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.586 [2024-07-25 11:44:48.367997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:32.586 [2024-07-25 11:44:48.368077] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:32.586 [2024-07-25 11:44:48.368149] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.586 [2024-07-25 11:44:48.368294] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:32.586 spare 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.586 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.845 [2024-07-25 11:44:48.468435] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:32.845 [2024-07-25 11:44:48.468525] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:32.845 [2024-07-25 11:44:48.468736] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:32.845 [2024-07-25 11:44:48.468883] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:32.845 [2024-07-25 11:44:48.468902] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:32.845 [2024-07-25 11:44:48.468999] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.845 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.845 "name": "raid_bdev1", 00:33:32.845 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:32.845 "strip_size_kb": 0, 00:33:32.845 "state": "online", 00:33:32.845 "raid_level": "raid1", 00:33:32.845 "superblock": true, 00:33:32.845 "num_base_bdevs": 2, 00:33:32.845 "num_base_bdevs_discovered": 2, 00:33:32.845 "num_base_bdevs_operational": 2, 00:33:32.845 "base_bdevs_list": [ 00:33:32.845 { 00:33:32.845 "name": "spare", 00:33:32.845 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:32.845 "is_configured": true, 00:33:32.845 "data_offset": 256, 00:33:32.845 "data_size": 7936 00:33:32.845 }, 00:33:32.845 { 00:33:32.845 "name": "BaseBdev2", 00:33:32.845 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:32.845 "is_configured": true, 00:33:32.845 "data_offset": 256, 00:33:32.845 "data_size": 7936 00:33:32.845 } 00:33:32.845 ] 00:33:32.845 }' 00:33:32.845 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.845 11:44:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.414 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.673 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:33.673 "name": "raid_bdev1", 00:33:33.673 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:33.673 "strip_size_kb": 0, 00:33:33.673 "state": "online", 00:33:33.673 "raid_level": "raid1", 00:33:33.673 "superblock": true, 00:33:33.673 "num_base_bdevs": 2, 00:33:33.673 "num_base_bdevs_discovered": 2, 00:33:33.673 "num_base_bdevs_operational": 2, 00:33:33.673 "base_bdevs_list": [ 00:33:33.673 { 00:33:33.673 "name": "spare", 00:33:33.673 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:33.673 "is_configured": true, 
00:33:33.673 "data_offset": 256, 00:33:33.673 "data_size": 7936 00:33:33.673 }, 00:33:33.673 { 00:33:33.673 "name": "BaseBdev2", 00:33:33.673 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:33.673 "is_configured": true, 00:33:33.673 "data_offset": 256, 00:33:33.673 "data_size": 7936 00:33:33.673 } 00:33:33.673 ] 00:33:33.673 }' 00:33:33.673 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:33.673 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:33.673 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:33.932 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:33.932 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.932 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:34.191 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.191 11:44:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:34.451 [2024-07-25 11:44:50.074005] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.451 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.710 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.710 "name": "raid_bdev1", 00:33:34.710 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:34.710 "strip_size_kb": 0, 00:33:34.710 "state": "online", 00:33:34.710 "raid_level": "raid1", 00:33:34.710 "superblock": true, 00:33:34.710 
"num_base_bdevs": 2, 00:33:34.710 "num_base_bdevs_discovered": 1, 00:33:34.710 "num_base_bdevs_operational": 1, 00:33:34.710 "base_bdevs_list": [ 00:33:34.710 { 00:33:34.710 "name": null, 00:33:34.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.710 "is_configured": false, 00:33:34.710 "data_offset": 256, 00:33:34.710 "data_size": 7936 00:33:34.710 }, 00:33:34.710 { 00:33:34.710 "name": "BaseBdev2", 00:33:34.710 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:34.710 "is_configured": true, 00:33:34.710 "data_offset": 256, 00:33:34.710 "data_size": 7936 00:33:34.710 } 00:33:34.710 ] 00:33:34.710 }' 00:33:34.710 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.710 11:44:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.279 11:44:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:35.538 [2024-07-25 11:44:51.346435] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:35.538 [2024-07-25 11:44:51.346721] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:35.538 [2024-07-25 11:44:51.346745] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:35.538 [2024-07-25 11:44:51.346856] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:35.538 [2024-07-25 11:44:51.361717] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:35.539 [2024-07-25 11:44:51.364147] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:35.539 11:44:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:36.919 "name": "raid_bdev1", 00:33:36.919 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:36.919 "strip_size_kb": 0, 00:33:36.919 "state": "online", 00:33:36.919 "raid_level": "raid1", 00:33:36.919 "superblock": true, 00:33:36.919 "num_base_bdevs": 2, 00:33:36.919 "num_base_bdevs_discovered": 2, 00:33:36.919 "num_base_bdevs_operational": 2, 00:33:36.919 "process": { 00:33:36.919 "type": "rebuild", 00:33:36.919 "target": "spare", 00:33:36.919 "progress": { 
00:33:36.919 "blocks": 3072, 00:33:36.919 "percent": 38 00:33:36.919 } 00:33:36.919 }, 00:33:36.919 "base_bdevs_list": [ 00:33:36.919 { 00:33:36.919 "name": "spare", 00:33:36.919 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:36.919 "is_configured": true, 00:33:36.919 "data_offset": 256, 00:33:36.919 "data_size": 7936 00:33:36.919 }, 00:33:36.919 { 00:33:36.919 "name": "BaseBdev2", 00:33:36.919 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:36.919 "is_configured": true, 00:33:36.919 "data_offset": 256, 00:33:36.919 "data_size": 7936 00:33:36.919 } 00:33:36.919 ] 00:33:36.919 }' 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:36.919 11:44:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:37.178 [2024-07-25 11:44:52.937916] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:37.178 [2024-07-25 11:44:52.976441] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:37.178 [2024-07-25 11:44:52.976567] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.178 [2024-07-25 11:44:52.976600] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:37.178 [2024-07-25 11:44:52.976613] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.178 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.745 11:44:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:37.745 "name": "raid_bdev1", 00:33:37.745 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:37.745 "strip_size_kb": 0, 00:33:37.745 "state": "online", 00:33:37.745 "raid_level": "raid1", 00:33:37.745 "superblock": true, 00:33:37.745 "num_base_bdevs": 2, 00:33:37.745 "num_base_bdevs_discovered": 1, 00:33:37.745 "num_base_bdevs_operational": 1, 00:33:37.745 "base_bdevs_list": [ 00:33:37.745 { 00:33:37.745 "name": null, 00:33:37.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.745 "is_configured": false, 00:33:37.745 "data_offset": 256, 00:33:37.745 "data_size": 7936 00:33:37.745 }, 00:33:37.745 { 00:33:37.745 "name": "BaseBdev2", 00:33:37.745 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:37.745 "is_configured": true, 00:33:37.745 "data_offset": 256, 00:33:37.745 "data_size": 7936 00:33:37.745 } 00:33:37.745 ] 00:33:37.745 }' 00:33:37.745 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:37.745 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.311 11:44:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:38.311 [2024-07-25 11:44:54.179162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:38.311 [2024-07-25 11:44:54.179269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:38.311 [2024-07-25 11:44:54.179316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:38.311 [2024-07-25 11:44:54.179331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:38.311 [2024-07-25 11:44:54.179584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:38.311 [2024-07-25 11:44:54.179608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:38.311 [2024-07-25 11:44:54.179720] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:38.311 [2024-07-25 11:44:54.179741] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:38.311 [2024-07-25 11:44:54.179757] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:38.311 [2024-07-25 11:44:54.179790] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:38.569 [2024-07-25 11:44:54.194422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:38.569 spare 00:33:38.569 [2024-07-25 11:44:54.197012] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:38.569 11:44:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.506 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.765 "name": "raid_bdev1", 00:33:39.765 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:39.765 "strip_size_kb": 0, 00:33:39.765 "state": "online", 00:33:39.765 "raid_level": "raid1", 00:33:39.765 "superblock": true, 00:33:39.765 "num_base_bdevs": 2, 00:33:39.765 "num_base_bdevs_discovered": 2, 00:33:39.765 "num_base_bdevs_operational": 2, 00:33:39.765 "process": { 00:33:39.765 "type": "rebuild", 00:33:39.765 "target": "spare", 00:33:39.765 "progress": { 00:33:39.765 "blocks": 3072, 00:33:39.765 "percent": 38 00:33:39.765 } 00:33:39.765 }, 00:33:39.765 "base_bdevs_list": [ 00:33:39.765 { 00:33:39.765 "name": "spare", 00:33:39.765 "uuid": "e85722f0-7286-5bae-a058-26962adc918a", 00:33:39.765 "is_configured": true, 00:33:39.765 "data_offset": 256, 00:33:39.765 "data_size": 7936 00:33:39.765 }, 00:33:39.765 { 00:33:39.765 "name": "BaseBdev2", 00:33:39.765 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:39.765 "is_configured": true, 00:33:39.765 "data_offset": 256, 00:33:39.765 "data_size": 7936 00:33:39.765 } 00:33:39.765 ] 00:33:39.765 }' 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:39.765 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:40.024 [2024-07-25 11:44:55.802721] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:40.024 [2024-07-25 11:44:55.809329] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:40.024 [2024-07-25 11:44:55.809441] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.024 [2024-07-25 11:44:55.809465] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:40.024 [2024-07-25 11:44:55.809479] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.024 11:44:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.282 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.282 "name": "raid_bdev1", 00:33:40.282 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:40.282 "strip_size_kb": 0, 00:33:40.282 "state": "online", 00:33:40.282 "raid_level": "raid1", 00:33:40.282 "superblock": true, 00:33:40.282 "num_base_bdevs": 2, 00:33:40.282 "num_base_bdevs_discovered": 1, 00:33:40.282 "num_base_bdevs_operational": 1, 00:33:40.282 "base_bdevs_list": [ 00:33:40.282 { 00:33:40.282 "name": null, 00:33:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.282 "is_configured": false, 00:33:40.282 "data_offset": 256, 00:33:40.282 "data_size": 7936 00:33:40.282 }, 00:33:40.282 { 00:33:40.282 "name": "BaseBdev2", 00:33:40.282 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:40.282 "is_configured": true, 00:33:40.282 "data_offset": 256, 00:33:40.282 "data_size": 7936 00:33:40.282 } 00:33:40.282 ] 00:33:40.282 }' 00:33:40.282 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.282 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:41.220 "name": "raid_bdev1", 00:33:41.220 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:41.220 "strip_size_kb": 0, 00:33:41.220 "state": "online", 00:33:41.220 "raid_level": "raid1", 00:33:41.220 "superblock": true, 00:33:41.220 "num_base_bdevs": 2, 00:33:41.220 "num_base_bdevs_discovered": 1, 00:33:41.220 "num_base_bdevs_operational": 1, 00:33:41.220 "base_bdevs_list": [ 00:33:41.220 { 00:33:41.220 "name": null, 00:33:41.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.220 "is_configured": false, 00:33:41.220 "data_offset": 256, 00:33:41.220 "data_size": 7936 00:33:41.220 }, 00:33:41.220 { 00:33:41.220 "name": "BaseBdev2", 00:33:41.220 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:41.220 "is_configured": true, 00:33:41.220 "data_offset": 256, 00:33:41.220 "data_size": 7936 00:33:41.220 } 00:33:41.220 ] 00:33:41.220 }' 00:33:41.220 11:44:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:41.220 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:41.220 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:41.220 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:41.220 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:41.789 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:41.789 [2024-07-25 11:44:57.587287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:41.789 [2024-07-25 11:44:57.587387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:41.789 [2024-07-25 11:44:57.587437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:41.789 [2024-07-25 11:44:57.587454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:41.789 [2024-07-25 11:44:57.587699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:41.789 [2024-07-25 11:44:57.587728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:41.789 [2024-07-25 11:44:57.587802] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:41.789 [2024-07-25 11:44:57.587827] bdev_raid.c:3680:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:41.789 [2024-07-25 11:44:57.587839] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:41.789 BaseBdev1 00:33:41.789 11:44:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.163 "name": "raid_bdev1", 00:33:43.163 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:43.163 "strip_size_kb": 0, 00:33:43.163 "state": "online", 00:33:43.163 "raid_level": "raid1", 00:33:43.163 "superblock": true, 00:33:43.163 "num_base_bdevs": 2, 00:33:43.163 "num_base_bdevs_discovered": 1, 00:33:43.163 "num_base_bdevs_operational": 1, 00:33:43.163 "base_bdevs_list": [ 00:33:43.163 { 00:33:43.163 "name": null, 00:33:43.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.163 "is_configured": false, 00:33:43.163 "data_offset": 256, 00:33:43.163 "data_size": 7936 00:33:43.163 }, 00:33:43.163 { 00:33:43.163 "name": "BaseBdev2", 00:33:43.163 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:43.163 "is_configured": true, 00:33:43.163 "data_offset": 256, 00:33:43.163 "data_size": 7936 00:33:43.163 } 00:33:43.163 ] 00:33:43.163 }' 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.163 11:44:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.741 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.002 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:44.002 "name": "raid_bdev1", 00:33:44.002 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:44.002 "strip_size_kb": 0, 00:33:44.002 "state": "online", 00:33:44.002 "raid_level": "raid1", 00:33:44.002 "superblock": true, 00:33:44.002 "num_base_bdevs": 2, 00:33:44.002 "num_base_bdevs_discovered": 1, 00:33:44.002 "num_base_bdevs_operational": 1, 00:33:44.002 "base_bdevs_list": [ 00:33:44.002 { 00:33:44.002 "name": null, 00:33:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.002 "is_configured": false, 00:33:44.002 "data_offset": 256, 00:33:44.002 "data_size": 7936 00:33:44.002 }, 00:33:44.002 { 00:33:44.002 "name": "BaseBdev2", 00:33:44.002 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:44.002 "is_configured": true, 00:33:44.002 "data_offset": 256, 00:33:44.002 "data_size": 7936 00:33:44.002 } 00:33:44.002 ] 00:33:44.002 }' 00:33:44.002 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:44.002 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:44.002 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:44.261 11:44:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:44.261 11:44:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:44.520 [2024-07-25 11:45:00.176096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:44.520 [2024-07-25 11:45:00.176292] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:44.520 [2024-07-25 11:45:00.176317] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:44.520 request: 00:33:44.520 { 00:33:44.520 "base_bdev": "BaseBdev1", 00:33:44.520 "raid_bdev": "raid_bdev1", 00:33:44.520 "method": "bdev_raid_add_base_bdev", 00:33:44.520 "req_id": 1 00:33:44.520 } 00:33:44.520 Got JSON-RPC error response 00:33:44.520 response: 00:33:44.520 { 00:33:44.520 "code": -22, 00:33:44.520 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:44.520 } 00:33:44.520 11:45:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:44.520 11:45:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:44.520 11:45:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:44.520 11:45:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:44.520 11:45:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.507 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.771 
11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:45.771 "name": "raid_bdev1", 00:33:45.771 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:45.771 "strip_size_kb": 0, 00:33:45.771 "state": "online", 00:33:45.771 "raid_level": "raid1", 00:33:45.771 "superblock": true, 00:33:45.771 "num_base_bdevs": 2, 00:33:45.771 "num_base_bdevs_discovered": 1, 00:33:45.771 "num_base_bdevs_operational": 1, 00:33:45.771 "base_bdevs_list": [ 00:33:45.771 { 00:33:45.771 "name": null, 00:33:45.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.771 "is_configured": false, 00:33:45.771 "data_offset": 256, 00:33:45.771 "data_size": 7936 00:33:45.771 }, 00:33:45.771 { 00:33:45.771 "name": "BaseBdev2", 00:33:45.771 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:45.771 "is_configured": true, 00:33:45.771 "data_offset": 256, 00:33:45.771 "data_size": 7936 00:33:45.771 } 00:33:45.771 ] 00:33:45.771 }' 00:33:45.771 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:45.771 11:45:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.339 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:46.598 "name": "raid_bdev1", 00:33:46.598 "uuid": "988d169f-55ae-49a7-8e0f-8943833be6ac", 00:33:46.598 "strip_size_kb": 0, 00:33:46.598 "state": "online", 00:33:46.598 "raid_level": "raid1", 00:33:46.598 "superblock": true, 00:33:46.598 "num_base_bdevs": 2, 00:33:46.598 "num_base_bdevs_discovered": 1, 00:33:46.598 "num_base_bdevs_operational": 1, 00:33:46.598 "base_bdevs_list": [ 00:33:46.598 { 00:33:46.598 "name": null, 00:33:46.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.598 "is_configured": false, 00:33:46.598 "data_offset": 256, 00:33:46.598 "data_size": 7936 00:33:46.598 }, 00:33:46.598 { 00:33:46.598 "name": "BaseBdev2", 00:33:46.598 "uuid": "9841d17e-b413-50f4-b5aa-ea66be412376", 00:33:46.598 "is_configured": true, 00:33:46.598 "data_offset": 256, 00:33:46.598 "data_size": 7936 00:33:46.598 } 00:33:46.598 ] 00:33:46.598 }' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:46.598 11:45:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 104293 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 104293 ']' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 104293 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 104293 00:33:46.598 killing process with pid 104293 00:33:46.598 Received shutdown signal, test time was about 60.000000 seconds 00:33:46.598 00:33:46.598 Latency(us) 00:33:46.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.598 =================================================================================================================== 00:33:46.598 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 104293' 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 104293 00:33:46.598 11:45:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 104293 00:33:46.598 [2024-07-25 11:45:02.451173] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:46.598 [2024-07-25 11:45:02.451340] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:46.598 [2024-07-25 11:45:02.451405] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:46.598 [2024-07-25 11:45:02.451423] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:46.857 [2024-07-25 11:45:02.723953] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:48.235 11:45:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:33:48.235 00:33:48.235 real 0m31.934s 00:33:48.235 user 0m51.293s 00:33:48.235 sys 0m3.267s 00:33:48.235 11:45:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.235 11:45:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.235 ************************************ 00:33:48.235 END TEST raid_rebuild_test_sb_md_interleaved 00:33:48.235 ************************************ 00:33:48.235 11:45:03 bdev_raid -- bdev/bdev_raid.sh@994 -- # trap - EXIT 00:33:48.235 11:45:03 bdev_raid -- bdev/bdev_raid.sh@995 -- # cleanup 00:33:48.235 11:45:03 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 104293 ']' 00:33:48.235 11:45:03 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 104293 00:33:48.235 11:45:03 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:33:48.235 
************************************ 00:33:48.235 END TEST bdev_raid 00:33:48.235 ************************************ 00:33:48.235 00:33:48.235 real 27m17.266s 00:33:48.235 user 46m1.486s 00:33:48.235 sys 3m31.216s 00:33:48.235 11:45:03 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:48.235 11:45:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:48.235 11:45:03 -- spdk/autotest.sh@203 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:48.235 11:45:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:48.235 11:45:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:48.235 11:45:03 -- common/autotest_common.sh@10 -- # set +x 00:33:48.235 ************************************ 00:33:48.235 START TEST spdkcli_raid 00:33:48.235 ************************************ 00:33:48.235 11:45:03 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:48.235 * Looking for test storage... 00:33:48.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:33:48.235 11:45:04 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:33:48.235 
11:45:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@15 -- # . /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=105109 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:48.235 11:45:04 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 105109 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 105109 ']' 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:48.235 11:45:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:48.494 [2024-07-25 11:45:04.203789] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:48.494 [2024-07-25 11:45:04.204127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105109 ] 00:33:48.752 [2024-07-25 11:45:04.378178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:48.752 [2024-07-25 11:45:04.606707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.752 [2024-07-25 11:45:04.606718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:33:49.689 11:45:05 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:49.689 11:45:05 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.689 11:45:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:49.689 11:45:05 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:49.689 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:49.689 ' 00:33:51.067 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:33:51.067 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:33:51.325 11:45:07 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:33:51.325 11:45:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:51.325 11:45:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:51.325 11:45:07 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:33:51.325 11:45:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:51.325 11:45:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:51.325 11:45:07 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:33:51.325 ' 00:33:52.261 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:33:52.520 11:45:08 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:33:52.520 11:45:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.520 11:45:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 11:45:08 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:33:52.520 11:45:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:52.520 11:45:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:52.520 11:45:08 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:33:52.520 11:45:08 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:33:53.087 11:45:08 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:33:53.087 
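The spdkcli_job.py invocations above are batched spdkcli commands, and check_match then diffs an "ll /bdevs" listing against spdkcli_raid.test.match. A hedged sketch of the same sequence driven by hand with scripts/spdkcli.py, reusing the exact command strings from the trace (raid_volume create takes the volume name, RAID level, base bdevs and strip size):
# Create the two 8 MB / 512 B-block malloc bdevs and the RAID volume on top of them.
./scripts/spdkcli.py "/bdevs/malloc create 8 512 Malloc1"
./scripts/spdkcli.py "/bdevs/malloc create 8 512 Malloc2"
./scripts/spdkcli.py "/bdevs/raid_volume create testraid 0 'Malloc1 Malloc2' 4"
# Capture the listing next to the expected-output file and run the match tool on it,
# mirroring what check_match does before it removes the generated .test file.
./scripts/spdkcli.py ll /bdevs > test/spdkcli/match_files/spdkcli_raid.test
./test/app/match/match test/spdkcli/match_files/spdkcli_raid.test.match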
11:45:08 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:33:53.087 11:45:08 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:33:53.087 11:45:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.087 11:45:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:53.087 11:45:08 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:33:53.087 11:45:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.087 11:45:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:53.087 11:45:08 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:33:53.087 ' 00:33:54.023 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:33:54.280 11:45:09 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:33:54.281 11:45:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:54.281 11:45:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:54.281 11:45:10 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:33:54.281 11:45:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:54.281 11:45:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:54.281 11:45:10 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:33:54.281 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:33:54.281 ' 00:33:55.652 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:33:55.652 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:33:55.652 11:45:11 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:55.652 11:45:11 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 105109 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 105109 ']' 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 105109 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@955 -- # uname 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:55.652 11:45:11 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105109 00:33:55.911 killing process with pid 105109 00:33:55.911 11:45:11 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:55.911 11:45:11 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:55.911 11:45:11 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105109' 00:33:55.911 11:45:11 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 105109 00:33:55.911 11:45:11 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 105109 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 105109 ']' 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 105109 00:33:58.441 11:45:13 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 105109 ']' 00:33:58.441 11:45:13 spdkcli_raid -- 
common/autotest_common.sh@954 -- # kill -0 105109 00:33:58.441 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (105109) - No such process 00:33:58.441 Process with pid 105109 is not found 00:33:58.441 11:45:13 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 105109 is not found' 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:58.441 11:45:13 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:58.441 ************************************ 00:33:58.441 END TEST spdkcli_raid 00:33:58.441 ************************************ 00:33:58.441 00:33:58.441 real 0m9.807s 00:33:58.441 user 0m19.924s 00:33:58.441 sys 0m1.053s 00:33:58.441 11:45:13 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:58.441 11:45:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:58.441 11:45:13 -- spdk/autotest.sh@204 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:58.441 11:45:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:58.441 11:45:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:58.441 11:45:13 -- common/autotest_common.sh@10 -- # set +x 00:33:58.441 ************************************ 00:33:58.441 START TEST blockdev_raid5f 00:33:58.441 ************************************ 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:58.441 * Looking for test storage... 
00:33:58.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=105361 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 105361 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 105361 ']' 00:33:58.441 11:45:13 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.441 11:45:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:58.441 [2024-07-25 11:45:14.053750] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
00:33:58.441 [2024-07-25 11:45:14.053923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105361 ] 00:33:58.441 [2024-07-25 11:45:14.233061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.700 [2024-07-25 11:45:14.487959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.635 11:45:15 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:59.635 11:45:15 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:33:59.635 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:33:59.635 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:33:59.635 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:33:59.635 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 Malloc0 00:33:59.636 Malloc1 00:33:59.636 Malloc2 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.636 11:45:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:59.636 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:33:59.923 11:45:15 blockdev_raid5f -- 
bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "02e9a97f-e113-4f5c-90e5-71e7462f1654"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02e9a97f-e113-4f5c-90e5-71e7462f1654",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "02e9a97f-e113-4f5c-90e5-71e7462f1654",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9ae875c0-386c-4a72-ac06-e907496e79fd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cf17500c-bcd0-4717-bfd1-8c1df0a6c0c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "39b291c1-b737-4d3b-9bd2-024a319c706c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:33:59.923 11:45:15 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 105361 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 105361 ']' 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 105361 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105361 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:59.923 killing process with pid 105361 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105361' 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 105361 00:33:59.923 11:45:15 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 105361 00:34:02.456 11:45:18 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:02.456 11:45:18 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:34:02.456 11:45:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:02.456 11:45:18 blockdev_raid5f -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:34:02.456 11:45:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:02.456 ************************************ 00:34:02.456 START TEST bdev_hello_world 00:34:02.456 ************************************ 00:34:02.457 11:45:18 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:34:02.457 [2024-07-25 11:45:18.205803] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:02.457 [2024-07-25 11:45:18.205971] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105423 ] 00:34:02.715 [2024-07-25 11:45:18.370288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.974 [2024-07-25 11:45:18.606440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.542 [2024-07-25 11:45:19.128221] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:03.542 [2024-07-25 11:45:19.128287] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:34:03.542 [2024-07-25 11:45:19.128321] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:03.542 [2024-07-25 11:45:19.129062] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:03.542 [2024-07-25 11:45:19.129241] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:03.542 [2024-07-25 11:45:19.129277] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:03.542 [2024-07-25 11:45:19.129352] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:34:03.542 00:34:03.542 [2024-07-25 11:45:19.129387] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:04.919 00:34:04.919 real 0m2.439s 00:34:04.919 user 0m2.007s 00:34:04.919 sys 0m0.309s 00:34:04.919 11:45:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:04.919 ************************************ 00:34:04.919 11:45:20 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:04.919 END TEST bdev_hello_world 00:34:04.919 ************************************ 00:34:04.919 11:45:20 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:34:04.919 11:45:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:04.919 11:45:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:04.919 11:45:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:04.919 ************************************ 00:34:04.919 START TEST bdev_bounds 00:34:04.919 ************************************ 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:34:04.920 Process bdevio pid: 105471 00:34:04.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
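For reference, the raid5f bdev that hello_bdev just wrote to and read "Hello World!" back from (and that bdevio exercises next) matches the earlier bdev_get_bdevs dump: three 32 MiB / 512 B malloc base bdevs behind a raid5f volume with 2 KiB strips and 131072 blocks. Below is a hedged sketch of a --json config describing the same layout; the file path is made up and the parameter spellings are my reading of the bdev_malloc_create/bdev_raid_create RPCs, not a dump of the bdev.json the harness generated:
# Sketch only: a config file in the shape the --json flag above consumes.
cat > /tmp/raid5f_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc1", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc2", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_raid_create",
          "params": { "name": "raid5f", "raid_level": "raid5f", "strip_size_kb": 2,
                      "base_bdevs": [ "Malloc0", "Malloc1", "Malloc2" ] } }
      ] }
  ]
}
EOF
./build/examples/hello_bdev --json /tmp/raid5f_bdev.json -b raid5f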
00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=105471 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 105471' 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 105471 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 105471 ']' 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:04.920 11:45:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:04.920 [2024-07-25 11:45:20.719317] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:04.920 [2024-07-25 11:45:20.719511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105471 ] 00:34:05.178 [2024-07-25 11:45:20.899221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:05.438 [2024-07-25 11:45:21.145965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.438 [2024-07-25 11:45:21.146044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.438 [2024-07-25 11:45:21.146056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.004 11:45:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:06.004 11:45:21 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:34:06.004 11:45:21 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:06.004 I/O targets: 00:34:06.004 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:34:06.004 00:34:06.004 00:34:06.004 CUnit - A unit testing framework for C - Version 2.1-3 00:34:06.004 http://cunit.sourceforge.net/ 00:34:06.004 00:34:06.004 00:34:06.004 Suite: bdevio tests on: raid5f 00:34:06.004 Test: blockdev write read block ...passed 00:34:06.004 Test: blockdev write zeroes read block ...passed 00:34:06.004 Test: blockdev write zeroes read no split ...passed 00:34:06.263 Test: blockdev write zeroes read split ...passed 00:34:06.263 Test: blockdev write zeroes read split partial ...passed 00:34:06.263 Test: blockdev reset ...passed 00:34:06.263 Test: blockdev write read 8 blocks ...passed 00:34:06.263 Test: blockdev write read size > 128k ...passed 00:34:06.263 Test: blockdev write read invalid size ...passed 00:34:06.263 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:06.263 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:34:06.263 Test: blockdev write read max offset ...passed 00:34:06.263 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:06.263 Test: blockdev writev readv 8 blocks ...passed 00:34:06.263 Test: blockdev writev readv 30 x 1block ...passed 00:34:06.263 Test: blockdev writev readv block ...passed 00:34:06.263 Test: blockdev writev readv size > 128k ...passed 00:34:06.263 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:06.263 Test: blockdev comparev and writev ...passed 00:34:06.263 Test: blockdev nvme passthru rw ...passed 00:34:06.263 Test: blockdev nvme passthru vendor specific ...passed 00:34:06.263 Test: blockdev nvme admin passthru ...passed 00:34:06.263 Test: blockdev copy ...passed 00:34:06.263 00:34:06.263 Run Summary: Type Total Ran Passed Failed Inactive 00:34:06.263 suites 1 1 n/a 0 0 00:34:06.263 tests 23 23 23 0 0 00:34:06.263 asserts 130 130 130 0 n/a 00:34:06.263 00:34:06.263 Elapsed time = 0.560 seconds 00:34:06.263 0 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 105471 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 105471 ']' 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 105471 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105471 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105471' 00:34:06.263 killing process with pid 105471 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 105471 00:34:06.263 11:45:22 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 105471 00:34:08.168 11:45:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:08.168 00:34:08.168 real 0m2.943s 00:34:08.168 user 0m6.802s 00:34:08.168 sys 0m0.462s 00:34:08.168 ************************************ 00:34:08.168 END TEST bdev_bounds 00:34:08.168 ************************************ 00:34:08.168 11:45:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:08.168 11:45:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:08.168 11:45:23 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:34:08.168 11:45:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:34:08.168 11:45:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:08.168 11:45:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:08.168 ************************************ 00:34:08.168 START TEST bdev_nbd 00:34:08.168 ************************************ 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@299 -- # uname -s 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=105531 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 105531 /var/tmp/spdk-nbd.sock 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 105531 ']' 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:08.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:08.168 11:45:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:08.168 [2024-07-25 11:45:23.744229] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 
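The nbd stage starting here exports raid5f as a kernel block device through the bdev_svc app's RPC socket at /var/tmp/spdk-nbd.sock. A condensed sketch of that flow using only the RPCs visible in the trace (nbd_start_disk, nbd_get_disks, nbd_stop_disk); the wait loop is a rough stand-in for waitfornbd, and the harness itself only checks /sys/module/nbd rather than loading the module:
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
modprobe nbd                                # only if the nbd module is not already loaded
$RPC nbd_start_disk raid5f /dev/nbd0        # export the raid5f bdev as /dev/nbd0
until grep -q -w nbd0 /proc/partitions; do  # roughly what waitfornbd does
    sleep 0.1
done
$RPC nbd_get_disks                          # prints the nbd_device / bdev_name mapping
$RPC nbd_stop_disk /dev/nbd0                # tear the export back down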
00:34:08.168 [2024-07-25 11:45:23.744413] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.168 [2024-07-25 11:45:23.919469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.427 [2024-07-25 11:45:24.165360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:08.993 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:09.251 11:45:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:09.251 1+0 records in 00:34:09.251 1+0 records out 00:34:09.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461915 s, 8.9 MB/s 00:34:09.251 11:45:24 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:09.251 11:45:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:34:09.251 11:45:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:09.252 11:45:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:09.252 11:45:25 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:34:09.252 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:09.252 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:34:09.252 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:09.510 { 00:34:09.510 "nbd_device": "/dev/nbd0", 00:34:09.510 "bdev_name": "raid5f" 00:34:09.510 } 00:34:09.510 ]' 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:09.510 { 00:34:09.510 "nbd_device": "/dev/nbd0", 00:34:09.510 "bdev_name": "raid5f" 00:34:09.510 } 00:34:09.510 ]' 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:09.510 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:09.768 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:10.027 11:45:25 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:34:10.286 /dev/nbd0 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 
00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:10.286 1+0 records in 00:34:10.286 1+0 records out 00:34:10.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459084 s, 8.9 MB/s 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.286 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:10.545 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:10.545 { 00:34:10.545 "nbd_device": "/dev/nbd0", 00:34:10.545 "bdev_name": "raid5f" 00:34:10.545 } 00:34:10.545 ]' 00:34:10.545 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:10.545 { 00:34:10.545 "nbd_device": "/dev/nbd0", 00:34:10.545 "bdev_name": "raid5f" 00:34:10.545 } 00:34:10.545 ]' 00:34:10.545 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:10.803 256+0 records in 00:34:10.803 256+0 records out 00:34:10.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108227 s, 96.9 MB/s 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:10.803 256+0 records in 00:34:10.803 256+0 records out 00:34:10.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0409879 s, 25.6 MB/s 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:34:10.803 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:10.804 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:11.063 11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.063 
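nbd_dd_data_verify, traced above, is a plain write-then-compare through the exported device. The same check reduced to its essentials, with a shorter temp path for readability (sizes and flags mirror the trace: 256 x 4 KiB written with O_DIRECT, then the first 1 MiB compared byte-for-byte):
# Write 1 MiB of random data through /dev/nbd0, read it back, and compare.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
rm /tmp/nbdrandtest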
11:45:26 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:11.322 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:11.592 malloc_lvol_verify 00:34:11.592 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:11.864 dc70cc66-50b3-4cf3-8206-51dd45379aca 00:34:11.864 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:12.122 6a99b515-3234-4d77-87a3-74fbfe5f05c1 00:34:12.122 11:45:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:12.380 /dev/nbd0 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:12.380 mke2fs 1.46.5 (30-Dec-2021) 00:34:12.380 Discarding device blocks: 0/4096 done 00:34:12.380 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:12.380 00:34:12.380 Allocating group tables: 0/1 done 00:34:12.380 Writing inode tables: 0/1 done 00:34:12.380 Creating journal (1024 blocks): done 00:34:12.380 Writing superblocks and filesystem accounting information: 0/1 done 00:34:12.380 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.380 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 105531 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 105531 ']' 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 105531 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 105531 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:12.638 killing process with pid 105531 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 105531' 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 105531 00:34:12.638 11:45:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 105531 00:34:14.540 11:45:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:14.540 00:34:14.540 real 0m6.334s 00:34:14.540 user 0m8.811s 00:34:14.540 sys 0m1.353s 00:34:14.540 11:45:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:14.540 11:45:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:14.540 ************************************ 00:34:14.540 END TEST bdev_nbd 00:34:14.540 ************************************ 00:34:14.540 11:45:29 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:34:14.540 11:45:29 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:34:14.540 11:45:29 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:34:14.540 11:45:29 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:34:14.540 11:45:29 blockdev_raid5f -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:14.540 11:45:29 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.540 11:45:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:14.540 ************************************ 00:34:14.540 START TEST bdev_fio 00:34:14.540 ************************************ 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:34:14.540 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 
--bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:14.540 ************************************ 00:34:14.540 START TEST bdev_fio_rw_verify 00:34:14.540 ************************************ 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:14.540 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:14.541 11:45:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:14.541 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:14.541 fio-3.35 00:34:14.541 Starting 1 thread 00:34:26.755 00:34:26.755 job_raid5f: (groupid=0, jobs=1): err= 0: pid=105728: Thu Jul 25 11:45:41 2024 00:34:26.755 read: IOPS=8569, BW=33.5MiB/s (35.1MB/s)(335MiB/10000msec) 00:34:26.755 slat (usec): min=23, max=152, avg=29.02, stdev= 5.37 00:34:26.755 clat (usec): min=12, max=506, avg=185.39, stdev=69.87 00:34:26.755 lat (usec): min=40, max=554, avg=214.40, stdev=70.50 00:34:26.755 clat percentiles (usec): 00:34:26.755 | 50.000th=[ 186], 99.000th=[ 322], 99.900th=[ 371], 99.990th=[ 412], 00:34:26.755 | 99.999th=[ 506] 00:34:26.755 write: IOPS=9036, BW=35.3MiB/s (37.0MB/s)(349MiB/9873msec); 0 zone resets 00:34:26.755 slat (usec): min=12, max=280, avg=23.34, stdev= 5.75 00:34:26.755 clat (usec): min=82, max=1240, avg=424.24, stdev=55.88 00:34:26.755 lat (usec): min=104, max=1521, avg=447.58, stdev=57.13 00:34:26.755 clat percentiles (usec): 00:34:26.755 | 50.000th=[ 429], 99.000th=[ 537], 99.900th=[ 619], 99.990th=[ 947], 00:34:26.755 | 99.999th=[ 1237] 00:34:26.755 bw ( KiB/s): min=33696, max=38656, per=98.41%, avg=35573.47, stdev=1560.31, samples=19 00:34:26.755 iops : min= 8424, max= 9664, avg=8893.37, stdev=390.08, samples=19 00:34:26.755 lat (usec) : 20=0.01%, 100=6.29%, 250=31.68%, 500=58.85%, 750=3.17% 00:34:26.755 lat (usec) : 1000=0.01% 00:34:26.755 lat (msec) : 2=0.01% 00:34:26.755 cpu : usr=98.74%, sys=0.49%, ctx=26, majf=0, minf=7462 00:34:26.755 IO depths : 1=7.7%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:26.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.755 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:26.755 issued rwts: total=85694,89219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:26.755 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:26.755 00:34:26.755 Run status group 0 (all jobs): 00:34:26.755 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=335MiB (351MB), run=10000-10000msec 00:34:26.755 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=349MiB (365MB), run=9873-9873msec 00:34:27.014 ----------------------------------------------------- 00:34:27.014 Suppressions used: 00:34:27.014 count bytes template 00:34:27.014 1 7 /usr/src/fio/parse.c 00:34:27.014 854 81984 /usr/src/fio/iolog.c 00:34:27.014 1 8 libtcmalloc_minimal.so 00:34:27.014 1 904 libcrypto.so 00:34:27.014 ----------------------------------------------------- 00:34:27.014 00:34:27.014 00:34:27.014 real 0m12.727s 00:34:27.014 user 0m13.050s 00:34:27.014 sys 0m0.846s 00:34:27.014 11:45:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:27.014 11:45:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:27.014 ************************************ 00:34:27.014 END TEST bdev_fio_rw_verify 
00:34:27.015 ************************************ 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:27.015 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "02e9a97f-e113-4f5c-90e5-71e7462f1654"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02e9a97f-e113-4f5c-90e5-71e7462f1654",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "02e9a97f-e113-4f5c-90e5-71e7462f1654",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9ae875c0-386c-4a72-ac06-e907496e79fd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cf17500c-bcd0-4717-bfd1-8c1df0a6c0c3",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "39b291c1-b737-4d3b-9bd2-024a319c706c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:27.274 /home/vagrant/spdk_repo/spdk 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:34:27.274 00:34:27.274 real 0m12.928s 00:34:27.274 user 0m13.139s 00:34:27.274 sys 0m0.937s 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:27.274 11:45:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:27.274 ************************************ 00:34:27.274 END TEST bdev_fio 00:34:27.274 ************************************ 00:34:27.274 11:45:42 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:27.274 11:45:42 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:27.274 11:45:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:34:27.274 11:45:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:27.274 11:45:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:27.274 ************************************ 00:34:27.274 START TEST bdev_verify 00:34:27.274 ************************************ 00:34:27.274 11:45:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:27.274 [2024-07-25 11:45:43.068096] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:27.274 [2024-07-25 11:45:43.068275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105886 ] 00:34:27.533 [2024-07-25 11:45:43.232286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:27.791 [2024-07-25 11:45:43.465139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.791 [2024-07-25 11:45:43.465149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.359 Running I/O for 5 seconds... 
00:34:33.629 00:34:33.629 Latency(us) 00:34:33.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.629 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:33.629 Verification LBA range: start 0x0 length 0x2000 00:34:33.629 raid5f : 5.01 6154.51 24.04 0.00 0.00 31398.76 435.67 35985.22 00:34:33.629 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:33.629 Verification LBA range: start 0x2000 length 0x2000 00:34:33.629 raid5f : 5.02 6243.14 24.39 0.00 0.00 30858.26 279.27 30146.56 00:34:33.629 =================================================================================================================== 00:34:33.629 Total : 12397.65 48.43 0.00 0.00 31126.55 279.27 35985.22 00:34:34.564 00:34:34.564 real 0m7.458s 00:34:34.564 user 0m13.508s 00:34:34.564 sys 0m0.297s 00:34:34.564 11:45:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.564 11:45:50 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:34.564 ************************************ 00:34:34.564 END TEST bdev_verify 00:34:34.564 ************************************ 00:34:34.822 11:45:50 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:34.822 11:45:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:34:34.822 11:45:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.822 11:45:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:34.822 ************************************ 00:34:34.822 START TEST bdev_verify_big_io 00:34:34.822 ************************************ 00:34:34.822 11:45:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:34.822 [2024-07-25 11:45:50.595178] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:34.822 [2024-07-25 11:45:50.595390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105974 ] 00:34:35.110 [2024-07-25 11:45:50.767270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:35.368 [2024-07-25 11:45:51.002808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.368 [2024-07-25 11:45:51.002823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.934 Running I/O for 5 seconds... 
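The big-I/O verification pass reported below is the same bdevperf verify workload with the request size raised from 4 KiB to 64 KiB, which is why its IOPS figures are far lower while the MiB/s totals stay in the same range. As a sketch, the only change relative to the invocation above (same assumptions, run from the repo root) would be:

# Hypothetical re-run of bdev_verify_big_io: 64 KiB requests instead of 4 KiB.
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3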
00:34:41.199 00:34:41.199 Latency(us) 00:34:41.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.199 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:41.199 Verification LBA range: start 0x0 length 0x200 00:34:41.199 raid5f : 5.22 340.53 21.28 0.00 0.00 9353834.58 171.29 533820.51 00:34:41.199 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:41.199 Verification LBA range: start 0x200 length 0x200 00:34:41.199 raid5f : 5.22 340.77 21.30 0.00 0.00 9418830.67 251.35 556698.53 00:34:41.199 =================================================================================================================== 00:34:41.199 Total : 681.30 42.58 0.00 0.00 9386332.62 171.29 556698.53 00:34:42.576 00:34:42.576 real 0m7.720s 00:34:42.576 user 0m13.979s 00:34:42.576 sys 0m0.328s 00:34:42.576 11:45:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:42.576 11:45:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:42.576 ************************************ 00:34:42.576 END TEST bdev_verify_big_io 00:34:42.576 ************************************ 00:34:42.576 11:45:58 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.576 11:45:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:42.576 11:45:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:42.576 11:45:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:42.576 ************************************ 00:34:42.576 START TEST bdev_write_zeroes 00:34:42.576 ************************************ 00:34:42.576 11:45:58 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:42.576 [2024-07-25 11:45:58.365473] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:42.576 [2024-07-25 11:45:58.365673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106072 ] 00:34:42.834 [2024-07-25 11:45:58.541349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.092 [2024-07-25 11:45:58.777065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.678 Running I/O for 1 seconds... 
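The write_zeroes test whose one-second result follows exercises the zero-fill path rather than data verification, and this run launches it on a single core without the per-core stats flags. A sketch of the corresponding change to the command above (same assumptions as before):

# Hypothetical re-run of the bdev_write_zeroes step: 4 KiB zero-fill for 1 second.
./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1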
00:34:44.613 00:34:44.613 Latency(us) 00:34:44.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.613 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:44.613 raid5f : 1.01 19936.86 77.88 0.00 0.00 6395.03 2070.34 7804.74 00:34:44.613 =================================================================================================================== 00:34:44.613 Total : 19936.86 77.88 0.00 0.00 6395.03 2070.34 7804.74 00:34:45.988 00:34:45.988 real 0m3.438s 00:34:45.988 user 0m2.976s 00:34:45.988 sys 0m0.329s 00:34:45.988 11:46:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.988 11:46:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:45.988 ************************************ 00:34:45.988 END TEST bdev_write_zeroes 00:34:45.988 ************************************ 00:34:45.988 11:46:01 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:45.988 11:46:01 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:45.988 11:46:01 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:45.988 11:46:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:45.988 ************************************ 00:34:45.988 START TEST bdev_json_nonenclosed 00:34:45.988 ************************************ 00:34:45.988 11:46:01 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:45.988 [2024-07-25 11:46:01.837437] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:45.988 [2024-07-25 11:46:01.837628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106121 ] 00:34:46.248 [2024-07-25 11:46:01.997095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.513 [2024-07-25 11:46:02.238242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.513 [2024-07-25 11:46:02.238390] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:46.513 [2024-07-25 11:46:02.238420] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:46.513 [2024-07-25 11:46:02.238438] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:47.080 00:34:47.080 real 0m0.917s 00:34:47.080 user 0m0.664s 00:34:47.080 sys 0m0.148s 00:34:47.080 11:46:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.080 11:46:02 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:47.080 ************************************ 00:34:47.080 END TEST bdev_json_nonenclosed 00:34:47.080 ************************************ 00:34:47.080 11:46:02 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:47.080 11:46:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:47.080 11:46:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.080 11:46:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:47.080 ************************************ 00:34:47.080 START TEST bdev_json_nonarray 00:34:47.080 ************************************ 00:34:47.080 11:46:02 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:47.080 [2024-07-25 11:46:02.817334] Starting SPDK v24.09-pre git sha1 86fd5638b / DPDK 24.03.0 initialization... 00:34:47.080 [2024-07-25 11:46:02.817546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106151 ] 00:34:47.338 [2024-07-25 11:46:02.992781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.338 [2024-07-25 11:46:03.215821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.338 [2024-07-25 11:46:03.215975] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
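Both JSON negative tests above end with the expected parse errors: nonenclosed.json feeds bdevperf a configuration that is not wrapped in an enclosing object, nonarray.json one whose "subsystems" member is not an array, and in each case the app stops with a non-zero exit as intended. For contrast, a minimal sketch of the shape the SPDK JSON config loader does accept (bdev list left empty here; this is an illustration, not the bdev.json used by this run):

# Sketch of a minimally well-formed SPDK JSON config: a top-level object whose
# "subsystems" member is an array of subsystem entries.
cat > /tmp/minimal_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF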
00:34:47.338 [2024-07-25 11:46:03.216007] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:47.338 [2024-07-25 11:46:03.216024] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:47.904 00:34:47.904 real 0m0.922s 00:34:47.904 user 0m0.664s 00:34:47.904 sys 0m0.151s 00:34:47.904 11:46:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.904 11:46:03 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 ************************************ 00:34:47.904 END TEST bdev_json_nonarray 00:34:47.904 ************************************ 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:34:47.904 11:46:03 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:34:47.904 00:34:47.904 real 0m49.853s 00:34:47.904 user 1m6.945s 00:34:47.904 sys 0m5.273s 00:34:47.904 11:46:03 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.904 11:46:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 ************************************ 00:34:47.904 END TEST blockdev_raid5f 00:34:47.904 ************************************ 00:34:47.904 11:46:03 -- spdk/autotest.sh@207 -- # uname -s 00:34:47.904 11:46:03 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@269 -- # timing_exit lib 00:34:47.904 11:46:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:47.904 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 11:46:03 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@285 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@318 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@322 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@336 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@349 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@353 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- 
spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@362 -- # '[' 0 -eq 1 ']' 00:34:47.904 11:46:03 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@377 -- # [[ 0 -eq 1 ]] 00:34:47.904 11:46:03 -- spdk/autotest.sh@382 -- # trap - SIGINT SIGTERM EXIT 00:34:47.904 11:46:03 -- spdk/autotest.sh@384 -- # timing_enter post_cleanup 00:34:47.904 11:46:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:47.904 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:34:47.904 11:46:03 -- spdk/autotest.sh@385 -- # autotest_cleanup 00:34:47.904 11:46:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:47.904 11:46:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:47.904 11:46:03 -- common/autotest_common.sh@10 -- # set +x 00:34:49.278 INFO: APP EXITING 00:34:49.278 INFO: killing all VMs 00:34:49.278 INFO: killing vhost app 00:34:49.278 INFO: EXIT DONE 00:34:49.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:49.845 Waiting for block devices as requested 00:34:49.845 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:49.845 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:50.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:50.672 Cleaning 00:34:50.672 Removing: /var/run/dpdk/spdk0/config 00:34:50.672 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:50.672 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:50.672 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:50.672 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:50.672 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:50.672 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:50.672 Removing: /dev/shm/spdk_tgt_trace.pid58857 00:34:50.672 Removing: /var/run/dpdk/spdk0 00:34:50.672 Removing: /var/run/dpdk/spdk_pid100320 00:34:50.672 Removing: /var/run/dpdk/spdk_pid100851 00:34:50.672 Removing: /var/run/dpdk/spdk_pid102054 00:34:50.672 Removing: /var/run/dpdk/spdk_pid102569 00:34:50.672 Removing: /var/run/dpdk/spdk_pid103780 00:34:50.672 Removing: /var/run/dpdk/spdk_pid104293 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105109 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105361 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105423 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105471 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105716 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105886 00:34:50.672 Removing: /var/run/dpdk/spdk_pid105974 00:34:50.672 Removing: /var/run/dpdk/spdk_pid106072 00:34:50.672 Removing: /var/run/dpdk/spdk_pid106121 00:34:50.672 Removing: /var/run/dpdk/spdk_pid106151 00:34:50.672 Removing: /var/run/dpdk/spdk_pid58630 00:34:50.672 Removing: /var/run/dpdk/spdk_pid58857 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59078 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59182 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59238 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59366 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59390 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59571 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59674 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59773 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59887 00:34:50.672 Removing: /var/run/dpdk/spdk_pid59987 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60032 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60074 
00:34:50.672 Removing: /var/run/dpdk/spdk_pid60142 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60237 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60693 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60768 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60842 00:34:50.672 Removing: /var/run/dpdk/spdk_pid60863 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61016 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61037 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61186 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61208 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61278 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61296 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61366 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61384 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61571 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61613 00:34:50.672 Removing: /var/run/dpdk/spdk_pid61694 00:34:50.672 Removing: /var/run/dpdk/spdk_pid63365 00:34:50.672 Removing: /var/run/dpdk/spdk_pid63731 00:34:50.672 Removing: /var/run/dpdk/spdk_pid63923 00:34:50.672 Removing: /var/run/dpdk/spdk_pid64850 00:34:50.672 Removing: /var/run/dpdk/spdk_pid65222 00:34:50.672 Removing: /var/run/dpdk/spdk_pid65404 00:34:50.672 Removing: /var/run/dpdk/spdk_pid66337 00:34:50.672 Removing: /var/run/dpdk/spdk_pid66872 00:34:50.672 Removing: /var/run/dpdk/spdk_pid67060 00:34:50.672 Removing: /var/run/dpdk/spdk_pid69202 00:34:50.672 Removing: /var/run/dpdk/spdk_pid69688 00:34:50.672 Removing: /var/run/dpdk/spdk_pid69884 00:34:50.672 Removing: /var/run/dpdk/spdk_pid72036 00:34:50.672 Removing: /var/run/dpdk/spdk_pid72516 00:34:50.672 Removing: /var/run/dpdk/spdk_pid72714 00:34:50.672 Removing: /var/run/dpdk/spdk_pid74850 00:34:50.672 Removing: /var/run/dpdk/spdk_pid75593 00:34:50.672 Removing: /var/run/dpdk/spdk_pid75790 00:34:50.673 Removing: /var/run/dpdk/spdk_pid78198 00:34:50.673 Removing: /var/run/dpdk/spdk_pid78743 00:34:50.937 Removing: /var/run/dpdk/spdk_pid78955 00:34:50.937 Removing: /var/run/dpdk/spdk_pid81350 00:34:50.937 Removing: /var/run/dpdk/spdk_pid81895 00:34:50.937 Removing: /var/run/dpdk/spdk_pid82110 00:34:50.937 Removing: /var/run/dpdk/spdk_pid84502 00:34:50.937 Removing: /var/run/dpdk/spdk_pid85342 00:34:50.937 Removing: /var/run/dpdk/spdk_pid85553 00:34:50.937 Removing: /var/run/dpdk/spdk_pid85755 00:34:50.937 Removing: /var/run/dpdk/spdk_pid86281 00:34:50.937 Removing: /var/run/dpdk/spdk_pid87192 00:34:50.937 Removing: /var/run/dpdk/spdk_pid87655 00:34:50.937 Removing: /var/run/dpdk/spdk_pid88519 00:34:50.937 Removing: /var/run/dpdk/spdk_pid89067 00:34:50.937 Removing: /var/run/dpdk/spdk_pid90018 00:34:50.937 Removing: /var/run/dpdk/spdk_pid90505 00:34:50.937 Removing: /var/run/dpdk/spdk_pid93309 00:34:50.937 Removing: /var/run/dpdk/spdk_pid94024 00:34:50.937 Removing: /var/run/dpdk/spdk_pid94530 00:34:50.937 Removing: /var/run/dpdk/spdk_pid97585 00:34:50.937 Removing: /var/run/dpdk/spdk_pid98424 00:34:50.937 Removing: /var/run/dpdk/spdk_pid99000 00:34:50.937 Clean 00:34:50.937 11:46:06 -- common/autotest_common.sh@1451 -- # return 0 00:34:50.937 11:46:06 -- spdk/autotest.sh@386 -- # timing_exit post_cleanup 00:34:50.937 11:46:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.937 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:34:50.937 11:46:06 -- spdk/autotest.sh@388 -- # timing_exit autotest 00:34:50.937 11:46:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.937 11:46:06 -- common/autotest_common.sh@10 -- # set +x 00:34:50.937 11:46:06 -- spdk/autotest.sh@389 -- # chmod a+r 
/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:50.937 11:46:06 -- spdk/autotest.sh@391 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:50.937 11:46:06 -- spdk/autotest.sh@391 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:50.937 11:46:06 -- spdk/autotest.sh@393 -- # hash lcov 00:34:50.937 11:46:06 -- spdk/autotest.sh@393 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:50.937 11:46:06 -- spdk/autotest.sh@395 -- # hostname 00:34:50.937 11:46:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:51.196 geninfo: WARNING: invalid characters removed from testname! 00:35:17.739 11:46:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:18.672 11:46:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:21.220 11:46:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:23.748 11:46:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:27.032 11:46:42 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:28.944 11:46:44 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:32.232 11:46:47 -- spdk/autotest.sh@402 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:32.232 11:46:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:32.232 11:46:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:32.232 11:46:47 -- scripts/common.sh@516 
-- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.232 11:46:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.232 11:46:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.232 11:46:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.232 11:46:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.232 11:46:47 -- paths/export.sh@5 -- $ export PATH 00:35:32.232 11:46:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.232 11:46:47 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:32.232 11:46:47 -- common/autobuild_common.sh@447 -- $ date +%s 00:35:32.232 11:46:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721908007.XXXXXX 00:35:32.232 11:46:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721908007.EwGwiZ 00:35:32.232 11:46:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:35:32.232 11:46:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:35:32.232 11:46:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:35:32.232 11:46:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:32.232 11:46:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:32.232 11:46:47 -- common/autobuild_common.sh@463 -- $ get_config_params 00:35:32.232 11:46:47 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:35:32.232 11:46:47 -- common/autotest_common.sh@10 -- $ set +x 00:35:32.232 11:46:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:35:32.232 11:46:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:35:32.232 11:46:47 -- pm/common@17 -- 
$ local monitor 00:35:32.232 11:46:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.232 11:46:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.232 11:46:47 -- pm/common@21 -- $ date +%s 00:35:32.232 11:46:47 -- pm/common@25 -- $ sleep 1 00:35:32.232 11:46:47 -- pm/common@21 -- $ date +%s 00:35:32.232 11:46:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721908007 00:35:32.232 11:46:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721908007 00:35:32.232 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721908007_collect-vmstat.pm.log 00:35:32.232 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721908007_collect-cpu-load.pm.log 00:35:32.798 11:46:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:35:32.798 11:46:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:35:32.798 11:46:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:35:32.798 11:46:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:32.798 11:46:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:32.798 11:46:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:32.798 11:46:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:32.798 11:46:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:32.798 11:46:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:32.798 11:46:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:32.798 11:46:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:32.798 11:46:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:32.798 11:46:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:32.798 11:46:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.798 11:46:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:32.798 11:46:48 -- pm/common@44 -- $ pid=107606 00:35:32.798 11:46:48 -- pm/common@50 -- $ kill -TERM 107606 00:35:32.798 11:46:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:32.798 11:46:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:32.798 11:46:48 -- pm/common@44 -- $ pid=107607 00:35:32.798 11:46:48 -- pm/common@50 -- $ kill -TERM 107607 00:35:32.798 + [[ -n 5108 ]] 00:35:32.798 + sudo kill 5108 00:35:33.066 [Pipeline] } 00:35:33.085 [Pipeline] // timeout 00:35:33.091 [Pipeline] } 00:35:33.109 [Pipeline] // stage 00:35:33.115 [Pipeline] } 00:35:33.131 [Pipeline] // catchError 00:35:33.141 [Pipeline] stage 00:35:33.143 [Pipeline] { (Stop VM) 00:35:33.155 [Pipeline] sh 00:35:33.433 + vagrant halt 00:35:37.621 ==> default: Halting domain... 00:35:42.920 [Pipeline] sh 00:35:43.201 + vagrant destroy -f 00:35:47.389 ==> default: Removing domain... 
00:35:47.404 [Pipeline] sh 00:35:47.690 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:35:47.704 [Pipeline] } 00:35:47.724 [Pipeline] // stage 00:35:47.728 [Pipeline] } 00:35:47.738 [Pipeline] // dir 00:35:47.742 [Pipeline] } 00:35:47.752 [Pipeline] // wrap 00:35:47.757 [Pipeline] } 00:35:47.765 [Pipeline] // catchError 00:35:47.771 [Pipeline] stage 00:35:47.773 [Pipeline] { (Epilogue) 00:35:47.782 [Pipeline] sh 00:35:48.055 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:54.629 [Pipeline] catchError 00:35:54.631 [Pipeline] { 00:35:54.647 [Pipeline] sh 00:35:54.927 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:54.927 Artifacts sizes are good 00:35:54.936 [Pipeline] } 00:35:54.954 [Pipeline] // catchError 00:35:54.966 [Pipeline] archiveArtifacts 00:35:54.973 Archiving artifacts 00:35:55.086 [Pipeline] cleanWs 00:35:55.096 [WS-CLEANUP] Deleting project workspace... 00:35:55.096 [WS-CLEANUP] Deferred wipeout is used... 00:35:55.103 [WS-CLEANUP] done 00:35:55.105 [Pipeline] } 00:35:55.123 [Pipeline] // stage 00:35:55.127 [Pipeline] } 00:35:55.143 [Pipeline] // node 00:35:55.147 [Pipeline] End of Pipeline 00:35:55.170 Finished: SUCCESS
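The merged coverage data gathered near the end of the run (cov_total.info, produced by the lcov filtering steps above) ends up in the output directory that the pipeline archives. If a browsable report is wanted afterwards, genhtml from the same lcov package can render one; a sketch, assuming cov_total.info has been copied out of the archived artifacts:

# Sketch: turn the archived cov_total.info into an HTML coverage report.
genhtml cov_total.info --output-directory coverage_html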